Nov 5 15:01:42.577195 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Nov 5 15:01:42.577250 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Wed Nov 5 13:42:06 -00 2025
Nov 5 15:01:42.577277 kernel: KASLR disabled due to lack of seed
Nov 5 15:01:42.577296 kernel: efi: EFI v2.7 by EDK II
Nov 5 15:01:42.577312 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598
Nov 5 15:01:42.580044 kernel: secureboot: Secure boot disabled
Nov 5 15:01:42.580072 kernel: ACPI: Early table checksum verification disabled
Nov 5 15:01:42.580089 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Nov 5 15:01:42.580105 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Nov 5 15:01:42.580132 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Nov 5 15:01:42.580184 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Nov 5 15:01:42.580201 kernel: ACPI: FACS 0x0000000078630000 000040
Nov 5 15:01:42.580218 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Nov 5 15:01:42.580234 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Nov 5 15:01:42.580260 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Nov 5 15:01:42.580277 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Nov 5 15:01:42.580295 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Nov 5 15:01:42.580312 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Nov 5 15:01:42.580330 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Nov 5 15:01:42.580347 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Nov 5 15:01:42.580365 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Nov 5 15:01:42.580404 kernel: printk: legacy bootconsole [uart0] enabled
Nov 5 15:01:42.580422 kernel: ACPI: Use ACPI SPCR as default console: No
Nov 5 15:01:42.580441 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Nov 5 15:01:42.580465 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff]
Nov 5 15:01:42.580482 kernel: Zone ranges:
Nov 5 15:01:42.580499 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Nov 5 15:01:42.580515 kernel: DMA32 empty
Nov 5 15:01:42.580532 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Nov 5 15:01:42.580549 kernel: Device empty
Nov 5 15:01:42.580565 kernel: Movable zone start for each node
Nov 5 15:01:42.580582 kernel: Early memory node ranges
Nov 5 15:01:42.580599 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Nov 5 15:01:42.580615 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Nov 5 15:01:42.580632 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Nov 5 15:01:42.580648 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Nov 5 15:01:42.580670 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Nov 5 15:01:42.580687 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Nov 5 15:01:42.580704 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Nov 5 15:01:42.580720 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Nov 5 15:01:42.580744 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Nov 5 15:01:42.580766 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Nov 5 15:01:42.580784 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Nov 5 15:01:42.580801 kernel: psci: probing for conduit method from ACPI.
Nov 5 15:01:42.580818 kernel: psci: PSCIv1.0 detected in firmware.
Nov 5 15:01:42.580836 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 5 15:01:42.580853 kernel: psci: Trusted OS migration not required
Nov 5 15:01:42.580870 kernel: psci: SMC Calling Convention v1.1
Nov 5 15:01:42.580888 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Nov 5 15:01:42.580906 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Nov 5 15:01:42.580928 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Nov 5 15:01:42.580947 kernel: pcpu-alloc: [0] 0 [0] 1
Nov 5 15:01:42.580965 kernel: Detected PIPT I-cache on CPU0
Nov 5 15:01:42.580982 kernel: CPU features: detected: GIC system register CPU interface
Nov 5 15:01:42.581000 kernel: CPU features: detected: Spectre-v2
Nov 5 15:01:42.581017 kernel: CPU features: detected: Spectre-v3a
Nov 5 15:01:42.581035 kernel: CPU features: detected: Spectre-BHB
Nov 5 15:01:42.581052 kernel: CPU features: detected: ARM erratum 1742098
Nov 5 15:01:42.581069 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Nov 5 15:01:42.581087 kernel: alternatives: applying boot alternatives
Nov 5 15:01:42.581106 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=15758474ef4cace68fb389c1b75e821ab8f30d9b752a28429e0459793723ea7b
Nov 5 15:01:42.581130 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 5 15:01:42.581184 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 5 15:01:42.581204 kernel: Fallback order for Node 0: 0
Nov 5 15:01:42.581222 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Nov 5 15:01:42.581240 kernel: Policy zone: Normal
Nov 5 15:01:42.581258 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 5 15:01:42.581276 kernel: software IO TLB: area num 2.
Nov 5 15:01:42.581294 kernel: software IO TLB: mapped [mem 0x000000006f800000-0x0000000073800000] (64MB)
Nov 5 15:01:42.581313 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 5 15:01:42.581332 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 5 15:01:42.581361 kernel: rcu: RCU event tracing is enabled.
Nov 5 15:01:42.581380 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 5 15:01:42.581399 kernel: Trampoline variant of Tasks RCU enabled.
Nov 5 15:01:42.581418 kernel: Tracing variant of Tasks RCU enabled.
Nov 5 15:01:42.581436 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 5 15:01:42.581454 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 5 15:01:42.581472 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 15:01:42.581490 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 5 15:01:42.581508 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 5 15:01:42.581525 kernel: GICv3: 96 SPIs implemented
Nov 5 15:01:42.581544 kernel: GICv3: 0 Extended SPIs implemented
Nov 5 15:01:42.581566 kernel: Root IRQ handler: gic_handle_irq
Nov 5 15:01:42.581583 kernel: GICv3: GICv3 features: 16 PPIs
Nov 5 15:01:42.581601 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Nov 5 15:01:42.581618 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Nov 5 15:01:42.581636 kernel: ITS [mem 0x10080000-0x1009ffff]
Nov 5 15:01:42.581655 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Nov 5 15:01:42.581673 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Nov 5 15:01:42.581691 kernel: GICv3: using LPI property table @0x0000000400110000
Nov 5 15:01:42.581709 kernel: ITS: Using hypervisor restricted LPI range [128]
Nov 5 15:01:42.581727 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Nov 5 15:01:42.581745 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 5 15:01:42.581767 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Nov 5 15:01:42.581786 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Nov 5 15:01:42.581804 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Nov 5 15:01:42.581822 kernel: Console: colour dummy device 80x25
Nov 5 15:01:42.581841 kernel: printk: legacy console [tty1] enabled
Nov 5 15:01:42.581860 kernel: ACPI: Core revision 20240827
Nov 5 15:01:42.581879 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Nov 5 15:01:42.581897 kernel: pid_max: default: 32768 minimum: 301
Nov 5 15:01:42.581919 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Nov 5 15:01:42.581938 kernel: landlock: Up and running.
Nov 5 15:01:42.581955 kernel: SELinux: Initializing.
Nov 5 15:01:42.581973 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 15:01:42.581992 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 5 15:01:42.582010 kernel: rcu: Hierarchical SRCU implementation.
Nov 5 15:01:42.582028 kernel: rcu: Max phase no-delay instances is 400.
Nov 5 15:01:42.582047 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Nov 5 15:01:42.582069 kernel: Remapping and enabling EFI services.
Nov 5 15:01:42.582087 kernel: smp: Bringing up secondary CPUs ...
Nov 5 15:01:42.582105 kernel: Detected PIPT I-cache on CPU1
Nov 5 15:01:42.582123 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Nov 5 15:01:42.582176 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Nov 5 15:01:42.582201 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Nov 5 15:01:42.582219 kernel: smp: Brought up 1 node, 2 CPUs
Nov 5 15:01:42.582244 kernel: SMP: Total of 2 processors activated.
Nov 5 15:01:42.582263 kernel: CPU: All CPU(s) started at EL1
Nov 5 15:01:42.582291 kernel: CPU features: detected: 32-bit EL0 Support
Nov 5 15:01:42.582313 kernel: CPU features: detected: 32-bit EL1 Support
Nov 5 15:01:42.582332 kernel: CPU features: detected: CRC32 instructions
Nov 5 15:01:42.582350 kernel: alternatives: applying system-wide alternatives
Nov 5 15:01:42.582371 kernel: Memory: 3822956K/4030464K available (11136K kernel code, 2456K rwdata, 9084K rodata, 12992K init, 1038K bss, 186164K reserved, 16384K cma-reserved)
Nov 5 15:01:42.582390 kernel: devtmpfs: initialized
Nov 5 15:01:42.582413 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 5 15:01:42.582432 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 5 15:01:42.582451 kernel: 23536 pages in range for non-PLT usage
Nov 5 15:01:42.582470 kernel: 515056 pages in range for PLT usage
Nov 5 15:01:42.582488 kernel: pinctrl core: initialized pinctrl subsystem
Nov 5 15:01:42.582511 kernel: SMBIOS 3.0.0 present.
Nov 5 15:01:42.582529 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Nov 5 15:01:42.582548 kernel: DMI: Memory slots populated: 0/0
Nov 5 15:01:42.582567 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 5 15:01:42.582586 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 5 15:01:42.582605 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 5 15:01:42.582624 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 5 15:01:42.582647 kernel: audit: initializing netlink subsys (disabled)
Nov 5 15:01:42.582666 kernel: audit: type=2000 audit(0.225:1): state=initialized audit_enabled=0 res=1
Nov 5 15:01:42.582685 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 5 15:01:42.582704 kernel: cpuidle: using governor menu
Nov 5 15:01:42.582723 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 5 15:01:42.582741 kernel: ASID allocator initialised with 65536 entries
Nov 5 15:01:42.582760 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 5 15:01:42.582783 kernel: Serial: AMBA PL011 UART driver
Nov 5 15:01:42.582802 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 5 15:01:42.582820 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 5 15:01:42.582839 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 5 15:01:42.582857 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 5 15:01:42.582876 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 5 15:01:42.582895 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 5 15:01:42.582918 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 5 15:01:42.582937 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 5 15:01:42.582956 kernel: ACPI: Added _OSI(Module Device)
Nov 5 15:01:42.582974 kernel: ACPI: Added _OSI(Processor Device)
Nov 5 15:01:42.582993 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 5 15:01:42.583012 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 5 15:01:42.583030 kernel: ACPI: Interpreter enabled
Nov 5 15:01:42.583053 kernel: ACPI: Using GIC for interrupt routing
Nov 5 15:01:42.583072 kernel: ACPI: MCFG table detected, 1 entries
Nov 5 15:01:42.583091 kernel: ACPI: CPU0 has been hot-added
Nov 5 15:01:42.583110 kernel: ACPI: CPU1 has been hot-added
Nov 5 15:01:42.583128 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Nov 5 15:01:42.583559 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 5 15:01:42.583830 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 5 15:01:42.584105 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 5 15:01:42.584602 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Nov 5 15:01:42.584900 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Nov 5 15:01:42.584934 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Nov 5 15:01:42.584956 kernel: acpiphp: Slot [1] registered
Nov 5 15:01:42.584976 kernel: acpiphp: Slot [2] registered
Nov 5 15:01:42.585004 kernel: acpiphp: Slot [3] registered
Nov 5 15:01:42.585024 kernel: acpiphp: Slot [4] registered
Nov 5 15:01:42.585044 kernel: acpiphp: Slot [5] registered
Nov 5 15:01:42.585064 kernel: acpiphp: Slot [6] registered
Nov 5 15:01:42.585083 kernel: acpiphp: Slot [7] registered
Nov 5 15:01:42.585102 kernel: acpiphp: Slot [8] registered
Nov 5 15:01:42.585121 kernel: acpiphp: Slot [9] registered
Nov 5 15:01:42.585185 kernel: acpiphp: Slot [10] registered
Nov 5 15:01:42.585221 kernel: acpiphp: Slot [11] registered
Nov 5 15:01:42.585243 kernel: acpiphp: Slot [12] registered
Nov 5 15:01:42.585264 kernel: acpiphp: Slot [13] registered
Nov 5 15:01:42.585283 kernel: acpiphp: Slot [14] registered
Nov 5 15:01:42.585303 kernel: acpiphp: Slot [15] registered
Nov 5 15:01:42.585322 kernel: acpiphp: Slot [16] registered
Nov 5 15:01:42.585342 kernel: acpiphp: Slot [17] registered
Nov 5 15:01:42.585367 kernel: acpiphp: Slot [18] registered
Nov 5 15:01:42.585386 kernel: acpiphp: Slot [19] registered
Nov 5 15:01:42.585405 kernel: acpiphp: Slot [20] registered
Nov 5 15:01:42.585424 kernel: acpiphp: Slot [21] registered
Nov 5 15:01:42.585443 kernel: acpiphp: Slot [22] registered
Nov 5 15:01:42.585461 kernel: acpiphp: Slot [23] registered
Nov 5 15:01:42.585481 kernel: acpiphp: Slot [24] registered
Nov 5 15:01:42.585506 kernel: acpiphp: Slot [25] registered
Nov 5 15:01:42.585526 kernel: acpiphp: Slot [26] registered
Nov 5 15:01:42.585544 kernel: acpiphp: Slot [27] registered
Nov 5 15:01:42.585563 kernel: acpiphp: Slot [28] registered
Nov 5 15:01:42.585583 kernel: acpiphp: Slot [29] registered
Nov 5 15:01:42.585602 kernel: acpiphp: Slot [30] registered
Nov 5 15:01:42.585621 kernel: acpiphp: Slot [31] registered
Nov 5 15:01:42.585639 kernel: PCI host bridge to bus 0000:00
Nov 5 15:01:42.586125 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Nov 5 15:01:42.586429 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 5 15:01:42.586667 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Nov 5 15:01:42.586892 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Nov 5 15:01:42.587240 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Nov 5 15:01:42.587569 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Nov 5 15:01:42.587840 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Nov 5 15:01:42.588131 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Nov 5 15:01:42.588482 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Nov 5 15:01:42.588745 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Nov 5 15:01:42.589027 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Nov 5 15:01:42.589336 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Nov 5 15:01:42.589613 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Nov 5 15:01:42.589866 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Nov 5 15:01:42.590115 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Nov 5 15:01:42.590408 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned
Nov 5 15:01:42.590685 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned
Nov 5 15:01:42.590942 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned
Nov 5 15:01:42.591236 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned
Nov 5 15:01:42.591550 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned
Nov 5 15:01:42.591805 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Nov 5 15:01:42.592038 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 5 15:01:42.592351 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Nov 5 15:01:42.592408 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 5 15:01:42.592432 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 5 15:01:42.592453 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 5 15:01:42.592475 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 5 15:01:42.592494 kernel: iommu: Default domain type: Translated
Nov 5 15:01:42.592514 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 5 15:01:42.592542 kernel: efivars: Registered efivars operations
Nov 5 15:01:42.592562 kernel: vgaarb: loaded
Nov 5 15:01:42.592581 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 5 15:01:42.592600 kernel: VFS: Disk quotas dquot_6.6.0
Nov 5 15:01:42.592619 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 5 15:01:42.592638 kernel: pnp: PnP ACPI init
Nov 5 15:01:42.592964 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Nov 5 15:01:42.593003 kernel: pnp: PnP ACPI: found 1 devices
Nov 5 15:01:42.593023 kernel: NET: Registered PF_INET protocol family
Nov 5 15:01:42.593043 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 5 15:01:42.593063 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 5 15:01:42.593085 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 5 15:01:42.593105 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 5 15:01:42.593124 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 5 15:01:42.593218 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 5 15:01:42.593241 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 15:01:42.593261 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 5 15:01:42.593280 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 5 15:01:42.593300 kernel: PCI: CLS 0 bytes, default 64
Nov 5 15:01:42.593319 kernel: kvm [1]: HYP mode not available
Nov 5 15:01:42.593339 kernel: Initialise system trusted keyrings
Nov 5 15:01:42.595812 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 5 15:01:42.595842 kernel: Key type asymmetric registered
Nov 5 15:01:42.595861 kernel: Asymmetric key parser 'x509' registered
Nov 5 15:01:42.595881 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Nov 5 15:01:42.595900 kernel: io scheduler mq-deadline registered
Nov 5 15:01:42.595919 kernel: io scheduler kyber registered
Nov 5 15:01:42.595939 kernel: io scheduler bfq registered
Nov 5 15:01:42.596357 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Nov 5 15:01:42.596419 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 5 15:01:42.596440 kernel: ACPI: button: Power Button [PWRB]
Nov 5 15:01:42.596461 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Nov 5 15:01:42.596481 kernel: ACPI: button: Sleep Button [SLPB]
Nov 5 15:01:42.596501 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 5 15:01:42.596532 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Nov 5 15:01:42.596842 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Nov 5 15:01:42.596877 kernel: printk: legacy console [ttyS0] disabled
Nov 5 15:01:42.596897 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Nov 5 15:01:42.596916 kernel: printk: legacy console [ttyS0] enabled
Nov 5 15:01:42.596936 kernel: printk: legacy bootconsole [uart0] disabled
Nov 5 15:01:42.596955 kernel: thunder_xcv, ver 1.0
Nov 5 15:01:42.596980 kernel: thunder_bgx, ver 1.0
Nov 5 15:01:42.596999 kernel: nicpf, ver 1.0
Nov 5 15:01:42.597018 kernel: nicvf, ver 1.0
Nov 5 15:01:42.597379 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 5 15:01:42.597647 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-05T15:01:38 UTC (1762354898)
Nov 5 15:01:42.597677 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 5 15:01:42.597697 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Nov 5 15:01:42.597727 kernel: NET: Registered PF_INET6 protocol family
Nov 5 15:01:42.597746 kernel: watchdog: NMI not fully supported
Nov 5 15:01:42.597765 kernel: watchdog: Hard watchdog permanently disabled
Nov 5 15:01:42.597784 kernel: Segment Routing with IPv6
Nov 5 15:01:42.597803 kernel: In-situ OAM (IOAM) with IPv6
Nov 5 15:01:42.597821 kernel: NET: Registered PF_PACKET protocol family
Nov 5 15:01:42.597840 kernel: Key type dns_resolver registered
Nov 5 15:01:42.597863 kernel: registered taskstats version 1
Nov 5 15:01:42.597882 kernel: Loading compiled-in X.509 certificates
Nov 5 15:01:42.597902 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 4b3babb46eb583bd8b0310732885d24e60ea58c5'
Nov 5 15:01:42.597921 kernel: Demotion targets for Node 0: null
Nov 5 15:01:42.597940 kernel: Key type .fscrypt registered
Nov 5 15:01:42.597959 kernel: Key type fscrypt-provisioning registered
Nov 5 15:01:42.597978 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 5 15:01:42.598000 kernel: ima: Allocated hash algorithm: sha1
Nov 5 15:01:42.598019 kernel: ima: No architecture policies found
Nov 5 15:01:42.598038 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 5 15:01:42.598057 kernel: clk: Disabling unused clocks
Nov 5 15:01:42.598076 kernel: PM: genpd: Disabling unused power domains
Nov 5 15:01:42.598094 kernel: Freeing unused kernel memory: 12992K
Nov 5 15:01:42.598113 kernel: Run /init as init process
Nov 5 15:01:42.598176 kernel: with arguments:
Nov 5 15:01:42.598205 kernel: /init
Nov 5 15:01:42.598225 kernel: with environment:
Nov 5 15:01:42.598243 kernel: HOME=/
Nov 5 15:01:42.598262 kernel: TERM=linux
Nov 5 15:01:42.598282 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Nov 5 15:01:42.598555 kernel: nvme nvme0: pci function 0000:00:04.0
Nov 5 15:01:42.598779 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Nov 5 15:01:42.598813 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 5 15:01:42.598834 kernel: GPT:25804799 != 33554431
Nov 5 15:01:42.598854 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 5 15:01:42.598873 kernel: GPT:25804799 != 33554431
Nov 5 15:01:42.598895 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 5 15:01:42.598915 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Nov 5 15:01:42.598941 kernel: SCSI subsystem initialized
Nov 5 15:01:42.598961 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 5 15:01:42.598980 kernel: device-mapper: uevent: version 1.0.3
Nov 5 15:01:42.599000 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Nov 5 15:01:42.599020 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Nov 5 15:01:42.599039 kernel: raid6: neonx8 gen() 6489 MB/s
Nov 5 15:01:42.599058 kernel: raid6: neonx4 gen() 6440 MB/s
Nov 5 15:01:42.599084 kernel: raid6: neonx2 gen() 5343 MB/s
Nov 5 15:01:42.599104 kernel: raid6: neonx1 gen() 3924 MB/s
Nov 5 15:01:42.599123 kernel: raid6: int64x8 gen() 3603 MB/s
Nov 5 15:01:42.599177 kernel: raid6: int64x4 gen() 3602 MB/s
Nov 5 15:01:42.599200 kernel: raid6: int64x2 gen() 3427 MB/s
Nov 5 15:01:42.599219 kernel: raid6: int64x1 gen() 2765 MB/s
Nov 5 15:01:42.599239 kernel: raid6: using algorithm neonx8 gen() 6489 MB/s
Nov 5 15:01:42.599264 kernel: raid6: .... xor() 4754 MB/s, rmw enabled
Nov 5 15:01:42.599283 kernel: raid6: using neon recovery algorithm
Nov 5 15:01:42.599302 kernel: xor: measuring software checksum speed
Nov 5 15:01:42.599321 kernel: 8regs : 13029 MB/sec
Nov 5 15:01:42.599340 kernel: 32regs : 12542 MB/sec
Nov 5 15:01:42.599359 kernel: arm64_neon : 8966 MB/sec
Nov 5 15:01:42.599379 kernel: xor: using function: 8regs (13029 MB/sec)
Nov 5 15:01:42.599401 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 5 15:01:42.599421 kernel: BTRFS: device fsid d8f84a83-fd8b-4c0e-831a-0d7c5ff234be devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (220)
Nov 5 15:01:42.599440 kernel: BTRFS info (device dm-0): first mount of filesystem d8f84a83-fd8b-4c0e-831a-0d7c5ff234be
Nov 5 15:01:42.599459 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 5 15:01:42.599478 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Nov 5 15:01:42.599498 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 5 15:01:42.599517 kernel: BTRFS info (device dm-0): enabling free space tree
Nov 5 15:01:42.599540 kernel: loop: module loaded
Nov 5 15:01:42.599559 kernel: loop0: detected capacity change from 0 to 91464
Nov 5 15:01:42.599578 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 5 15:01:42.599599 systemd[1]: Successfully made /usr/ read-only.
Nov 5 15:01:42.599626 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 5 15:01:42.599648 systemd[1]: Detected virtualization amazon.
Nov 5 15:01:42.599673 systemd[1]: Detected architecture arm64.
Nov 5 15:01:42.599693 systemd[1]: Running in initrd.
Nov 5 15:01:42.599713 systemd[1]: No hostname configured, using default hostname.
Nov 5 15:01:42.599734 systemd[1]: Hostname set to .
Nov 5 15:01:42.599755 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Nov 5 15:01:42.599776 systemd[1]: Queued start job for default target initrd.target.
Nov 5 15:01:42.599812 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 5 15:01:42.599838 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 5 15:01:42.599859 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 5 15:01:42.599883 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 5 15:01:42.599904 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 5 15:01:42.599931 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 5 15:01:42.599954 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 5 15:01:42.599976 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 5 15:01:42.600006 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 5 15:01:42.600028 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Nov 5 15:01:42.600049 systemd[1]: Reached target paths.target - Path Units.
Nov 5 15:01:42.600075 systemd[1]: Reached target slices.target - Slice Units.
Nov 5 15:01:42.600097 systemd[1]: Reached target swap.target - Swaps.
Nov 5 15:01:42.600119 systemd[1]: Reached target timers.target - Timer Units.
Nov 5 15:01:42.600171 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 5 15:01:42.600200 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 5 15:01:42.600223 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 5 15:01:42.600245 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 5 15:01:42.600273 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 5 15:01:42.600295 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 5 15:01:42.600317 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 5 15:01:42.600339 systemd[1]: Reached target sockets.target - Socket Units.
Nov 5 15:01:42.600361 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 5 15:01:42.600408 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 5 15:01:42.600432 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 5 15:01:42.600460 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 5 15:01:42.600482 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Nov 5 15:01:42.600504 systemd[1]: Starting systemd-fsck-usr.service...
Nov 5 15:01:42.600530 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 5 15:01:42.600556 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 5 15:01:42.600578 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:01:42.600601 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 5 15:01:42.600628 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 5 15:01:42.600650 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 5 15:01:42.600672 systemd[1]: Finished systemd-fsck-usr.service.
Nov 5 15:01:42.600749 systemd-journald[357]: Collecting audit messages is disabled.
Nov 5 15:01:42.600801 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 5 15:01:42.600823 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 5 15:01:42.600845 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 5 15:01:42.600866 systemd-journald[357]: Journal started
Nov 5 15:01:42.600903 systemd-journald[357]: Runtime Journal (/run/log/journal/ec29f3c61d73c3050d7a0a3912a6476c) is 8M, max 75.3M, 67.3M free.
Nov 5 15:01:42.607051 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 5 15:01:42.613231 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 5 15:01:42.631242 kernel: Bridge firewalling registered
Nov 5 15:01:42.629639 systemd-modules-load[360]: Inserted module 'br_netfilter'
Nov 5 15:01:42.633248 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 5 15:01:42.643487 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 5 15:01:42.658326 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 5 15:01:42.669323 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:01:42.679350 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 5 15:01:42.687516 systemd-tmpfiles[376]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Nov 5 15:01:42.698400 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 5 15:01:42.712035 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 5 15:01:42.716781 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 5 15:01:42.750192 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 5 15:01:42.761353 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 5 15:01:42.879238 dracut-cmdline[401]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=15758474ef4cace68fb389c1b75e821ab8f30d9b752a28429e0459793723ea7b
Nov 5 15:01:42.885522 systemd-resolved[388]: Positive Trust Anchors:
Nov 5 15:01:42.885541 systemd-resolved[388]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 5 15:01:42.885549 systemd-resolved[388]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 5 15:01:42.885608 systemd-resolved[388]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 5 15:01:43.142180 kernel: Loading iSCSI transport class v2.0-870.
Nov 5 15:01:43.158167 kernel: random: crng init done
Nov 5 15:01:43.158564 systemd-resolved[388]: Defaulting to hostname 'linux'.
Nov 5 15:01:43.176383 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 5 15:01:43.181539 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 5 15:01:43.199184 kernel: iscsi: registered transport (tcp)
Nov 5 15:01:43.263195 kernel: iscsi: registered transport (qla4xxx)
Nov 5 15:01:43.263270 kernel: QLogic iSCSI HBA Driver
Nov 5 15:01:43.305764 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 5 15:01:43.336351 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 5 15:01:43.339729 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 5 15:01:43.425584 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 5 15:01:43.430618 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 5 15:01:43.442538 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 5 15:01:43.501259 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 5 15:01:43.511285 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 5 15:01:43.576341 systemd-udevd[641]: Using default interface naming scheme 'v257'.
Nov 5 15:01:43.598700 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 5 15:01:43.610417 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 5 15:01:43.650888 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 5 15:01:43.661671 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 5 15:01:43.671370 dracut-pre-trigger[723]: rd.md=0: removing MD RAID activation
Nov 5 15:01:43.727219 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 5 15:01:43.735327 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 5 15:01:43.775989 systemd-networkd[749]: lo: Link UP
Nov 5 15:01:43.776705 systemd-networkd[749]: lo: Gained carrier
Nov 5 15:01:43.780860 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 5 15:01:43.788630 systemd[1]: Reached target network.target - Network.
Nov 5 15:01:43.897945 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 5 15:01:43.909487 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 5 15:01:44.138549 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 5 15:01:44.141408 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:01:44.152343 kernel: nvme nvme0: using unchecked data buffer
Nov 5 15:01:44.144114 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:01:44.151734 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 5 15:01:44.171520 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 5 15:01:44.171598 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Nov 5 15:01:44.186216 kernel: ena 0000:00:05.0: ENA device version: 0.10
Nov 5 15:01:44.186635 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Nov 5 15:01:44.193284 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:e0:63:0a:09:79
Nov 5 15:01:44.196317 (udev-worker)[786]: Network interface NamePolicy= disabled on kernel command line.
Nov 5 15:01:44.210057 systemd-networkd[749]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 15:01:44.210079 systemd-networkd[749]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 5 15:01:44.226041 systemd-networkd[749]: eth0: Link UP
Nov 5 15:01:44.226431 systemd-networkd[749]: eth0: Gained carrier
Nov 5 15:01:44.226453 systemd-networkd[749]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 5 15:01:44.226714 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 5 15:01:44.245294 systemd-networkd[749]: eth0: DHCPv4 address 172.31.21.83/20, gateway 172.31.16.1 acquired from 172.31.16.1
Nov 5 15:01:44.308279 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Nov 5 15:01:44.331909 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 5 15:01:44.367607 disk-uuid[860]: Primary Header is updated.
Nov 5 15:01:44.367607 disk-uuid[860]: Secondary Entries is updated.
Nov 5 15:01:44.367607 disk-uuid[860]: Secondary Header is updated.
Nov 5 15:01:44.495808 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Nov 5 15:01:44.548883 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Nov 5 15:01:44.613462 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Nov 5 15:01:44.890691 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 5 15:01:44.898057 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 5 15:01:44.904053 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 5 15:01:44.909724 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 5 15:01:44.917453 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 5 15:01:44.951343 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 5 15:01:45.512408 disk-uuid[866]: Warning: The kernel is still using the old partition table.
Nov 5 15:01:45.512408 disk-uuid[866]: The new table will be used at the next reboot or after you
Nov 5 15:01:45.512408 disk-uuid[866]: run partprobe(8) or kpartx(8)
Nov 5 15:01:45.512408 disk-uuid[866]: The operation has completed successfully.
Nov 5 15:01:45.533697 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 5 15:01:45.535455 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 5 15:01:45.543337 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 5 15:01:45.609214 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1098)
Nov 5 15:01:45.613051 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804
Nov 5 15:01:45.613122 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Nov 5 15:01:45.649451 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 5 15:01:45.649532 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Nov 5 15:01:45.660245 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804
Nov 5 15:01:45.661636 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 5 15:01:45.672624 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 5 15:01:46.043403 systemd-networkd[749]: eth0: Gained IPv6LL
Nov 5 15:01:47.083796 ignition[1117]: Ignition 2.22.0
Nov 5 15:01:47.083837 ignition[1117]: Stage: fetch-offline
Nov 5 15:01:47.087918 ignition[1117]: no configs at "/usr/lib/ignition/base.d"
Nov 5 15:01:47.087972 ignition[1117]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 5 15:01:47.093427 ignition[1117]: Ignition finished successfully
Nov 5 15:01:47.098574 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 5 15:01:47.106331 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 5 15:01:47.161903 ignition[1127]: Ignition 2.22.0
Nov 5 15:01:47.161933 ignition[1127]: Stage: fetch
Nov 5 15:01:47.166280 ignition[1127]: no configs at "/usr/lib/ignition/base.d"
Nov 5 15:01:47.166328 ignition[1127]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 5 15:01:47.166511 ignition[1127]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 5 15:01:47.183838 ignition[1127]: PUT result: OK
Nov 5 15:01:47.189242 ignition[1127]: parsed url from cmdline: ""
Nov 5 15:01:47.189269 ignition[1127]: no config URL provided
Nov 5 15:01:47.189288 ignition[1127]: reading system config file "/usr/lib/ignition/user.ign"
Nov 5 15:01:47.189328 ignition[1127]: no config at "/usr/lib/ignition/user.ign"
Nov 5 15:01:47.189390 ignition[1127]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 5 15:01:47.191693 ignition[1127]: PUT result: OK
Nov 5 15:01:47.191915 ignition[1127]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Nov 5 15:01:47.197455 ignition[1127]: GET result: OK
Nov 5 15:01:47.197694 ignition[1127]: parsing config with SHA512: da5c2c64684b4d70a70af244d79699c78ce12c2a66c6b81ef95a2a87752af27931408c81bb19caff7b95e14f20e24f407650dfb77ad9626469cd8a57fcdb8a7e
Nov 5 15:01:47.218574 unknown[1127]: fetched base config from "system"
Nov 5 15:01:47.218611 unknown[1127]: fetched base config from "system"
Nov 5 15:01:47.221048 ignition[1127]: fetch: fetch complete
Nov 5 15:01:47.218626 unknown[1127]: fetched user config from "aws"
Nov 5 15:01:47.221064 ignition[1127]: fetch: fetch passed
Nov 5 15:01:47.221218 ignition[1127]: Ignition finished successfully
Nov 5 15:01:47.233066 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 5 15:01:47.238629 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 5 15:01:47.299326 ignition[1134]: Ignition 2.22.0
Nov 5 15:01:47.299360 ignition[1134]: Stage: kargs
Nov 5 15:01:47.301179 ignition[1134]: no configs at "/usr/lib/ignition/base.d"
Nov 5 15:01:47.301220 ignition[1134]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 5 15:01:47.301411 ignition[1134]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 5 15:01:47.305236 ignition[1134]: PUT result: OK
Nov 5 15:01:47.323642 ignition[1134]: kargs: kargs passed
Nov 5 15:01:47.326761 ignition[1134]: Ignition finished successfully
Nov 5 15:01:47.335229 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 5 15:01:47.341477 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 5 15:01:47.405489 ignition[1141]: Ignition 2.22.0
Nov 5 15:01:47.405518 ignition[1141]: Stage: disks
Nov 5 15:01:47.407295 ignition[1141]: no configs at "/usr/lib/ignition/base.d"
Nov 5 15:01:47.407317 ignition[1141]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Nov 5 15:01:47.407546 ignition[1141]: PUT http://169.254.169.254/latest/api/token: attempt #1
Nov 5 15:01:47.416556 ignition[1141]: PUT result: OK
Nov 5 15:01:47.420887 ignition[1141]: disks: disks passed
Nov 5 15:01:47.421194 ignition[1141]: Ignition finished successfully
Nov 5 15:01:47.428226 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 5 15:01:47.431252 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 5 15:01:47.434326 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 5 15:01:47.439129 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 5 15:01:47.443500 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 5 15:01:47.448120 systemd[1]: Reached target basic.target - Basic System.
Nov 5 15:01:47.454074 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 5 15:01:47.579881 systemd-fsck[1150]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks
Nov 5 15:01:47.584955 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 5 15:01:47.595203 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 5 15:01:47.835168 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 67ab558f-e1dc-496b-b18a-e9709809a3c4 r/w with ordered data mode. Quota mode: none.
Nov 5 15:01:47.836556 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 5 15:01:47.841078 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 5 15:01:47.892302 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 5 15:01:47.894373 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 5 15:01:47.905998 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 5 15:01:47.910123 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 5 15:01:47.910893 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 5 15:01:47.925983 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 5 15:01:47.932270 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 5 15:01:47.950180 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1169)
Nov 5 15:01:47.950243 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804
Nov 5 15:01:47.952032 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Nov 5 15:01:47.960684 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Nov 5 15:01:47.960769 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Nov 5 15:01:47.964026 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 5 15:01:48.899125 initrd-setup-root[1193]: cut: /sysroot/etc/passwd: No such file or directory
Nov 5 15:01:48.908906 initrd-setup-root[1200]: cut: /sysroot/etc/group: No such file or directory
Nov 5 15:01:48.922680 initrd-setup-root[1207]: cut: /sysroot/etc/shadow: No such file or directory
Nov 5 15:01:48.944643 initrd-setup-root[1214]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 5 15:01:49.579759 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 5 15:01:49.585382 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 5 15:01:49.594673 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 5 15:01:49.622297 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 5 15:01:49.628672 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804
Nov 5 15:01:49.668231 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 5 15:01:49.685960 ignition[1283]: INFO : Ignition 2.22.0 Nov 5 15:01:49.688154 ignition[1283]: INFO : Stage: mount Nov 5 15:01:49.688154 ignition[1283]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:01:49.688154 ignition[1283]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 15:01:49.695408 ignition[1283]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 15:01:49.698778 ignition[1283]: INFO : PUT result: OK Nov 5 15:01:49.704072 ignition[1283]: INFO : mount: mount passed Nov 5 15:01:49.706225 ignition[1283]: INFO : Ignition finished successfully Nov 5 15:01:49.708970 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 5 15:01:49.715595 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 5 15:01:49.764132 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 15:01:49.801227 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1294) Nov 5 15:01:49.801295 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804 Nov 5 15:01:49.806129 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Nov 5 15:01:49.813510 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 5 15:01:49.813614 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 5 15:01:49.816999 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 5 15:01:49.874528 ignition[1311]: INFO : Ignition 2.22.0 Nov 5 15:01:49.874528 ignition[1311]: INFO : Stage: files Nov 5 15:01:49.879668 ignition[1311]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:01:49.879668 ignition[1311]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 15:01:49.879668 ignition[1311]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 15:01:49.887689 ignition[1311]: INFO : PUT result: OK Nov 5 15:01:49.893912 ignition[1311]: DEBUG : files: compiled without relabeling support, skipping Nov 5 15:01:49.897878 ignition[1311]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 5 15:01:49.901105 ignition[1311]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 5 15:01:49.940916 ignition[1311]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 5 15:01:49.946923 ignition[1311]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 5 15:01:49.950441 unknown[1311]: wrote ssh authorized keys file for user: core Nov 5 15:01:49.953008 ignition[1311]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 5 15:01:49.955946 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Nov 5 15:01:49.955946 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Nov 5 15:01:50.056319 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 5 15:01:50.201320 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Nov 5 15:01:50.206274 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 5 15:01:50.206274 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(4): 
[finished] writing file "/sysroot/home/core/install.sh" Nov 5 15:01:50.206274 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 5 15:01:50.219025 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 5 15:01:50.219025 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 15:01:50.219025 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 15:01:50.219025 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 15:01:50.219025 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 15:01:50.219025 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 15:01:50.219025 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 15:01:50.219025 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 5 15:01:50.219025 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 5 15:01:50.257439 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 5 15:01:50.257439 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Nov 5 15:01:50.775539 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 5 15:01:52.085266 ignition[1311]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 5 15:01:52.090663 ignition[1311]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 5 15:01:52.179182 ignition[1311]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 15:01:52.183665 ignition[1311]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 15:01:52.188893 ignition[1311]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 5 15:01:52.188893 ignition[1311]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 5 15:01:52.188893 ignition[1311]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 5 15:01:52.188893 ignition[1311]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 5 15:01:52.188893 ignition[1311]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 5 15:01:52.188893 ignition[1311]: INFO : files: files passed Nov 
5 15:01:52.188893 ignition[1311]: INFO : Ignition finished successfully Nov 5 15:01:52.212650 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 5 15:01:52.218386 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 5 15:01:52.229697 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 5 15:01:52.252616 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 5 15:01:52.252849 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 5 15:01:52.278243 initrd-setup-root-after-ignition[1343]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:01:52.282128 initrd-setup-root-after-ignition[1347]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:01:52.286260 initrd-setup-root-after-ignition[1343]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:01:52.292854 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 15:01:52.302848 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 5 15:01:52.307080 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 5 15:01:52.409169 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 5 15:01:52.412004 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 5 15:01:52.418295 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 5 15:01:52.423094 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 5 15:01:52.426053 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 5 15:01:52.431743 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 5 15:01:52.474961 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 15:01:52.481298 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 5 15:01:52.520008 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 15:01:52.520782 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:01:52.528940 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:01:52.531969 systemd[1]: Stopped target timers.target - Timer Units. Nov 5 15:01:52.538824 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 5 15:01:52.539330 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 15:01:52.548600 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 5 15:01:52.551782 systemd[1]: Stopped target basic.target - Basic System. Nov 5 15:01:52.559721 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 5 15:01:52.564950 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 15:01:52.567776 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 5 15:01:52.575398 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 5 15:01:52.578309 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 5 15:01:52.585896 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 15:01:52.589849 systemd[1]: Stopped target sysinit.target - System Initialization. 
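
Note: the Ignition "files" stage above writes the payloads, links the kubernetes sysext into /sysroot/etc/extensions, enables prepare-helm.service via preset, and records its outcome in /sysroot/etc/.ignition-result.json. A minimal sketch for checking those artifacts after the root switch (paths taken from the log; the result file's JSON schema is not assumed, it is only printed):

    #!/usr/bin/env python3
    # Sketch: confirm the artifacts the Ignition "files" stage reported above.
    # Paths come from the log (/sysroot/... becomes / after switch-root); the
    # JSON schema of the result file is not assumed, it is just pretty-printed.
    import json, os, subprocess

    result = "/etc/.ignition-result.json"
    link = "/etc/extensions/kubernetes.raw"

    with open(result) as f:
        print(json.dumps(json.load(f), indent=2))

    print(link, "->", os.readlink(link))

    # "setting preset to enabled" above should leave the unit enabled:
    state = subprocess.run(["systemctl", "is-enabled", "prepare-helm.service"],
                           capture_output=True, text=True).stdout.strip()
    print("prepare-helm.service:", state)
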
Nov 5 15:01:52.597581 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 5 15:01:52.601048 systemd[1]: Stopped target swap.target - Swaps. Nov 5 15:01:52.605667 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 5 15:01:52.606368 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 5 15:01:52.614715 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:01:52.619825 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 15:01:52.625224 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 5 15:01:52.625632 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:01:52.633924 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 5 15:01:52.634230 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 5 15:01:52.642585 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 5 15:01:52.642883 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 15:01:52.646897 systemd[1]: ignition-files.service: Deactivated successfully. Nov 5 15:01:52.647255 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 5 15:01:52.659418 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 5 15:01:52.665249 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 5 15:01:52.669566 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 5 15:01:52.673508 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:01:52.682741 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 5 15:01:52.685577 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:01:52.691593 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 5 15:01:52.691841 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 15:01:52.710994 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 5 15:01:52.711249 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 5 15:01:52.742836 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 5 15:01:52.759873 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 5 15:01:52.763269 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 5 15:01:52.780924 ignition[1367]: INFO : Ignition 2.22.0 Nov 5 15:01:52.780924 ignition[1367]: INFO : Stage: umount Nov 5 15:01:52.786289 ignition[1367]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:01:52.788677 ignition[1367]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 15:01:52.791465 ignition[1367]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 15:01:52.795314 ignition[1367]: INFO : PUT result: OK Nov 5 15:01:52.800549 ignition[1367]: INFO : umount: umount passed Nov 5 15:01:52.802499 ignition[1367]: INFO : Ignition finished successfully Nov 5 15:01:52.807411 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 5 15:01:52.807697 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 5 15:01:52.814989 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 5 15:01:52.815101 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 5 15:01:52.824871 systemd[1]: ignition-kargs.service: Deactivated successfully. 
Nov 5 15:01:52.824964 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 5 15:01:52.827654 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 5 15:01:52.827748 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 5 15:01:52.834507 systemd[1]: Stopped target network.target - Network. Nov 5 15:01:52.837198 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 5 15:01:52.837292 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 15:01:52.844230 systemd[1]: Stopped target paths.target - Path Units. Nov 5 15:01:52.850572 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 5 15:01:52.857940 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:01:52.861019 systemd[1]: Stopped target slices.target - Slice Units. Nov 5 15:01:52.866326 systemd[1]: Stopped target sockets.target - Socket Units. Nov 5 15:01:52.870306 systemd[1]: iscsid.socket: Deactivated successfully. Nov 5 15:01:52.870533 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 15:01:52.876938 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 5 15:01:52.877013 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 15:01:52.879861 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 5 15:01:52.879969 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 5 15:01:52.886981 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 5 15:01:52.887075 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 5 15:01:52.890379 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 5 15:01:52.890465 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 5 15:01:52.895293 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 5 15:01:52.902337 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 5 15:01:52.910560 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 5 15:01:52.910773 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 5 15:01:52.928569 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 5 15:01:52.929419 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 5 15:01:52.938661 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 5 15:01:52.941578 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 5 15:01:52.941675 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:01:52.943664 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 5 15:01:52.944657 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 5 15:01:52.944764 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 15:01:52.947182 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 5 15:01:52.947304 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:01:52.948131 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 5 15:01:52.948235 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 5 15:01:52.955536 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:01:52.986612 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Nov 5 15:01:52.986913 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:01:52.995032 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 5 15:01:52.995501 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 5 15:01:53.001642 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 5 15:01:53.001717 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:01:53.004906 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 5 15:01:53.005003 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 5 15:01:53.010236 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 5 15:01:53.010336 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 5 15:01:53.017506 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 5 15:01:53.017596 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 15:01:53.037706 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 5 15:01:53.057671 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 5 15:01:53.057820 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:01:53.061504 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 5 15:01:53.061629 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:01:53.074546 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 5 15:01:53.074667 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 15:01:53.077778 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 5 15:01:53.077894 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:01:53.087050 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 15:01:53.087209 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:01:53.119785 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 5 15:01:53.120439 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 5 15:01:53.127505 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 5 15:01:53.127830 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 5 15:01:53.136531 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 5 15:01:53.143080 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 5 15:01:53.174804 systemd[1]: Switching root. Nov 5 15:01:53.276213 systemd-journald[357]: Journal stopped Nov 5 15:01:57.595994 systemd-journald[357]: Received SIGTERM from PID 1 (systemd). 
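
Note: at this point the initrd hands over to the real root and journald is stopped with SIGTERM. The initrd-phase messages above are still part of the current boot's journal, so they can be replayed afterwards; a small sketch using standard journalctl options and unit names from the log:

    #!/usr/bin/env python3
    # Sketch: replay the initrd-phase messages above from the journal of the
    # current boot (standard journalctl flags; unit names taken from the log).
    import subprocess

    for unit in ("ignition-files.service", "initrd-switch-root.service"):
        subprocess.run(["journalctl", "-b", "-o", "short-precise", "-u", unit])
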
Nov 5 15:01:57.602238 kernel: SELinux: policy capability network_peer_controls=1 Nov 5 15:01:57.602322 kernel: SELinux: policy capability open_perms=1 Nov 5 15:01:57.602357 kernel: SELinux: policy capability extended_socket_class=1 Nov 5 15:01:57.602390 kernel: SELinux: policy capability always_check_network=0 Nov 5 15:01:57.602423 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 5 15:01:57.602457 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 5 15:01:57.602505 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 5 15:01:57.602543 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 5 15:01:57.602578 kernel: SELinux: policy capability userspace_initial_context=0 Nov 5 15:01:57.602611 kernel: audit: type=1403 audit(1762354914.419:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 5 15:01:57.602645 systemd[1]: Successfully loaded SELinux policy in 136.280ms. Nov 5 15:01:57.602693 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 17.025ms. Nov 5 15:01:57.602731 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 15:01:57.602765 systemd[1]: Detected virtualization amazon. Nov 5 15:01:57.602801 systemd[1]: Detected architecture arm64. Nov 5 15:01:57.602834 systemd[1]: Detected first boot. Nov 5 15:01:57.602869 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 5 15:01:57.602901 zram_generator::config[1410]: No configuration found. Nov 5 15:01:57.602936 kernel: NET: Registered PF_VSOCK protocol family Nov 5 15:01:57.602969 systemd[1]: Populated /etc with preset unit settings. Nov 5 15:01:57.603002 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 5 15:01:57.603077 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 5 15:01:57.603118 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 5 15:01:57.603222 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 5 15:01:57.603261 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 5 15:01:57.603295 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 5 15:01:57.603330 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 5 15:01:57.603366 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 5 15:01:57.603408 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 5 15:01:57.603445 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 5 15:01:57.603478 systemd[1]: Created slice user.slice - User and Session Slice. Nov 5 15:01:57.603516 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:01:57.603549 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:01:57.603581 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 5 15:01:57.603611 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
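
Note: "Detected first boot" and "Initializing machine ID from SMBIOS/DMI UUID" above mean /etc/machine-id is seeded from the instance's firmware UUID rather than generated randomly. A sketch that compares the two, assuming the straightforward derivation (product UUID lower-cased with dashes removed, as it normally appears on EC2); reading product_uuid requires root:

    #!/usr/bin/env python3
    # Sketch: compare /etc/machine-id with the firmware UUID it was derived from.
    # Assumption: the derivation is simply the DMI product UUID, lower-cased
    # with dashes removed; reading /sys/class/dmi/id/product_uuid needs root.
    uuid = open("/sys/class/dmi/id/product_uuid").read().strip()
    machine_id = open("/etc/machine-id").read().strip()

    print("product_uuid:", uuid)
    print("machine-id  :", machine_id)
    print("match       :", uuid.replace("-", "").lower() == machine_id)
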
Nov 5 15:01:57.603645 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 5 15:01:57.603681 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 15:01:57.603712 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 5 15:01:57.603744 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 15:01:57.603777 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:01:57.603811 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 5 15:01:57.603845 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 5 15:01:57.603886 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 5 15:01:57.603915 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 5 15:01:57.603946 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:01:57.603977 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 15:01:57.604007 systemd[1]: Reached target slices.target - Slice Units. Nov 5 15:01:57.604038 systemd[1]: Reached target swap.target - Swaps. Nov 5 15:01:57.604073 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 5 15:01:57.604103 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 5 15:01:57.604134 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 5 15:01:57.608918 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:01:57.608954 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 15:01:57.608994 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:01:57.609028 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 5 15:01:57.609067 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 5 15:01:57.609098 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 5 15:01:57.609129 systemd[1]: Mounting media.mount - External Media Directory... Nov 5 15:01:57.609308 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 5 15:01:57.609349 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 5 15:01:57.609380 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 5 15:01:57.609415 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 5 15:01:57.609454 systemd[1]: Reached target machines.target - Containers. Nov 5 15:01:57.609487 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 5 15:01:57.609518 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:01:57.609547 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 15:01:57.609582 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 5 15:01:57.609708 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 15:01:57.609757 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Nov 5 15:01:57.609797 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 15:01:57.609827 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 5 15:01:57.609857 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 15:01:57.609892 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 5 15:01:57.609925 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 5 15:01:57.609957 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 5 15:01:57.609993 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 5 15:01:57.610026 systemd[1]: Stopped systemd-fsck-usr.service. Nov 5 15:01:57.610063 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:01:57.610097 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 15:01:57.610128 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 15:01:57.610196 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 15:01:57.610233 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 5 15:01:57.610273 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 5 15:01:57.610307 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 15:01:57.610342 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 5 15:01:57.610380 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 5 15:01:57.610416 systemd[1]: Mounted media.mount - External Media Directory. Nov 5 15:01:57.610447 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 5 15:01:57.610479 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 5 15:01:57.610509 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 5 15:01:57.610542 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:01:57.610573 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 5 15:01:57.610603 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 5 15:01:57.610642 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 15:01:57.610675 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 15:01:57.610707 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 15:01:57.610737 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 15:01:57.610769 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 15:01:57.610799 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 15:01:57.610829 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 15:01:57.610867 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:01:57.610901 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 5 15:01:57.610931 systemd[1]: Reached target network-pre.target - Preparation for Network. 
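
Note: each modprobe@<name>.service instance above essentially runs modprobe for its instance name. A quick sketch to confirm the modules ended up available (a loaded or built-in module appears under /sys/module/<name>; module names taken from the unit instances above):

    #!/usr/bin/env python3
    # Sketch: a module loaded (or built in) by the modprobe@<name>.service
    # instances above shows up under /sys/module/<name>.
    import os

    for mod in ("configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"):
        present = os.path.isdir(f"/sys/module/{mod}")
        print(f"{mod:<10} {'present' if present else 'absent'}")
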
Nov 5 15:01:57.610960 kernel: fuse: init (API version 7.41) Nov 5 15:01:57.610992 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 5 15:01:57.611028 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 5 15:01:57.611059 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 5 15:01:57.611089 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 15:01:57.611121 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 5 15:01:57.616082 kernel: ACPI: bus type drm_connector registered Nov 5 15:01:57.616267 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:01:57.616348 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 5 15:01:57.616397 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 15:01:57.616431 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 5 15:01:57.616465 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 15:01:57.616497 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 15:01:57.616532 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 5 15:01:57.616564 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 15:01:57.616604 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 15:01:57.616636 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 15:01:57.616736 systemd-journald[1489]: Collecting audit messages is disabled. Nov 5 15:01:57.616797 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 5 15:01:57.616832 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 5 15:01:57.616874 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 5 15:01:57.616905 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 5 15:01:57.616935 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 5 15:01:57.616965 systemd-journald[1489]: Journal started Nov 5 15:01:57.617017 systemd-journald[1489]: Runtime Journal (/run/log/journal/ec29f3c61d73c3050d7a0a3912a6476c) is 8M, max 75.3M, 67.3M free. Nov 5 15:01:56.785236 systemd[1]: Queued start job for default target multi-user.target. Nov 5 15:01:57.621540 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 15:01:56.808966 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 5 15:01:56.809885 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 5 15:01:57.622590 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 5 15:01:57.680632 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 5 15:01:57.692563 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 5 15:01:57.700235 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... 
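
Note: journald comes up above on the 8M runtime journal in /run/log/journal, and systemd-journal-flush then moves it to the persistent journal under /var/log/journal (the flush timing and sizes are reported in the entries that follow). A sketch that totals both directories for comparison with journald's own figures:

    #!/usr/bin/env python3
    # Sketch: total the runtime journal (/run/log/journal) and the persistent
    # journal (/var/log/journal) referred to by the journald messages here.
    import os

    def du(path):
        total = 0
        for root, _dirs, files in os.walk(path):
            for name in files:
                total += os.path.getsize(os.path.join(root, name))
        return total

    for path in ("/run/log/journal", "/var/log/journal"):
        if os.path.isdir(path):
            print(f"{path:<20} {du(path) / 2**20:8.1f} MiB")
        else:
            print(f"{path:<20} (absent)")
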
Nov 5 15:01:57.751038 systemd-tmpfiles[1525]: ACLs are not supported, ignoring. Nov 5 15:01:57.751079 systemd-tmpfiles[1525]: ACLs are not supported, ignoring. Nov 5 15:01:57.754954 systemd-journald[1489]: Time spent on flushing to /var/log/journal/ec29f3c61d73c3050d7a0a3912a6476c is 68.431ms for 917 entries. Nov 5 15:01:57.754954 systemd-journald[1489]: System Journal (/var/log/journal/ec29f3c61d73c3050d7a0a3912a6476c) is 8M, max 588.1M, 580.1M free. Nov 5 15:01:57.844379 systemd-journald[1489]: Received client request to flush runtime journal. Nov 5 15:01:57.844455 kernel: loop1: detected capacity change from 0 to 100624 Nov 5 15:01:57.773534 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 15:01:57.781473 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 5 15:01:57.790583 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 5 15:01:57.812417 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:01:57.817918 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 5 15:01:57.821747 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 5 15:01:57.848157 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 5 15:01:57.861945 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 5 15:01:57.894269 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:01:58.000714 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 5 15:01:58.008376 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 15:01:58.016503 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 15:01:58.060188 systemd-tmpfiles[1565]: ACLs are not supported, ignoring. Nov 5 15:01:58.060228 systemd-tmpfiles[1565]: ACLs are not supported, ignoring. Nov 5 15:01:58.070554 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:01:58.087015 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 5 15:01:58.169210 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 5 15:01:58.192186 kernel: loop2: detected capacity change from 0 to 119344 Nov 5 15:01:58.324757 systemd-resolved[1564]: Positive Trust Anchors: Nov 5 15:01:58.324796 systemd-resolved[1564]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 15:01:58.324810 systemd-resolved[1564]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 15:01:58.324872 systemd-resolved[1564]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 15:01:58.338501 systemd-resolved[1564]: Defaulting to hostname 'linux'. Nov 5 15:01:58.340983 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
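
Note: systemd-resolved above loads its DNSSEC trust anchors, lists the negative trust anchors, and falls back to hostname 'linux'. Once it is running, the effective per-link DNS configuration can be read back with resolvectl; both invocations below are standard, and the query name is only an example:

    #!/usr/bin/env python3
    # Sketch: read back what systemd-resolved (started above) ended up using.
    import subprocess

    subprocess.run(["resolvectl", "status"])                # per-link DNS configuration
    subprocess.run(["resolvectl", "query", "flatcar.org"])  # resolve through resolved
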
Nov 5 15:01:58.344014 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:01:58.478191 kernel: loop3: detected capacity change from 0 to 207008 Nov 5 15:01:58.668264 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 5 15:01:58.678001 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:01:58.738451 systemd-udevd[1578]: Using default interface naming scheme 'v257'. Nov 5 15:01:58.755059 kernel: loop4: detected capacity change from 0 to 61264 Nov 5 15:01:58.869458 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:01:58.878428 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 15:01:59.020990 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 5 15:01:59.039840 (udev-worker)[1590]: Network interface NamePolicy= disabled on kernel command line. Nov 5 15:01:59.092206 kernel: loop5: detected capacity change from 0 to 100624 Nov 5 15:01:59.107392 systemd-networkd[1583]: lo: Link UP Nov 5 15:01:59.107413 systemd-networkd[1583]: lo: Gained carrier Nov 5 15:01:59.110751 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 15:01:59.116504 systemd[1]: Reached target network.target - Network. Nov 5 15:01:59.122183 kernel: loop6: detected capacity change from 0 to 119344 Nov 5 15:01:59.127343 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 5 15:01:59.137090 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 5 15:01:59.144911 systemd-networkd[1583]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:01:59.144937 systemd-networkd[1583]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 15:01:59.153302 kernel: loop7: detected capacity change from 0 to 207008 Nov 5 15:01:59.156036 systemd-networkd[1583]: eth0: Link UP Nov 5 15:01:59.156561 systemd-networkd[1583]: eth0: Gained carrier Nov 5 15:01:59.156603 systemd-networkd[1583]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:01:59.170297 systemd-networkd[1583]: eth0: DHCPv4 address 172.31.21.83/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 5 15:01:59.191447 kernel: loop1: detected capacity change from 0 to 61264 Nov 5 15:01:59.207063 (sd-merge)[1611]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Nov 5 15:01:59.218899 (sd-merge)[1611]: Merged extensions into '/usr'. Nov 5 15:01:59.229453 systemd[1]: Reload requested from client PID 1524 ('systemd-sysext') (unit systemd-sysext.service)... Nov 5 15:01:59.229486 systemd[1]: Reloading... Nov 5 15:01:59.519304 zram_generator::config[1670]: No configuration found. Nov 5 15:02:00.234332 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 5 15:02:00.239249 systemd[1]: Reloading finished in 1008 ms. Nov 5 15:02:00.269248 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 5 15:02:00.273053 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 5 15:02:00.363887 systemd[1]: Starting ensure-sysext.service... 
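
Note: the (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes and oem-ami extension images onto /usr. After boot the same picture is available from systemd-sysext itself, and the raw images sit under the directories populated by the Ignition stage earlier; a short sketch:

    #!/usr/bin/env python3
    # Sketch: show the system extensions that (sd-merge) reports merging into
    # /usr above; directory paths come from the Ignition stage earlier in the log.
    import os, subprocess

    subprocess.run(["systemd-sysext", "status"])

    for d in ("/etc/extensions", "/opt/extensions/kubernetes"):
        if os.path.isdir(d):
            print(d, "->", sorted(os.listdir(d)))
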
Nov 5 15:02:00.371489 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 5 15:02:00.387820 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 15:02:00.393619 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:02:00.445503 systemd[1]: Reload requested from client PID 1797 ('systemctl') (unit ensure-sysext.service)... Nov 5 15:02:00.445536 systemd[1]: Reloading... Nov 5 15:02:00.501744 systemd-tmpfiles[1799]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 5 15:02:00.501834 systemd-tmpfiles[1799]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 5 15:02:00.502537 systemd-tmpfiles[1799]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 5 15:02:00.503050 systemd-tmpfiles[1799]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 5 15:02:00.507124 systemd-tmpfiles[1799]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 5 15:02:00.507979 systemd-tmpfiles[1799]: ACLs are not supported, ignoring. Nov 5 15:02:00.508341 systemd-tmpfiles[1799]: ACLs are not supported, ignoring. Nov 5 15:02:00.525098 systemd-tmpfiles[1799]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 15:02:00.525357 systemd-tmpfiles[1799]: Skipping /boot Nov 5 15:02:00.560827 systemd-tmpfiles[1799]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 15:02:00.560851 systemd-tmpfiles[1799]: Skipping /boot Nov 5 15:02:00.629230 zram_generator::config[1837]: No configuration found. Nov 5 15:02:00.699364 systemd-networkd[1583]: eth0: Gained IPv6LL Nov 5 15:02:01.114459 systemd[1]: Reloading finished in 668 ms. Nov 5 15:02:01.143342 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 15:02:01.176594 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 5 15:02:01.180756 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:02:01.187737 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:02:01.209364 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 15:02:01.215308 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 15:02:01.226642 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 5 15:02:01.235345 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 5 15:02:01.247733 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 5 15:02:01.255758 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 5 15:02:01.273587 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:02:01.277643 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 15:02:01.284827 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 15:02:01.298826 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
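
Note: systemd-fsck@dev-disk-by\x2dlabel-OEM.service above is the escaped unit name for a filesystem check on /dev/disk/by-label/OEM. udev maintains that by-label symlink, and resolving it shows which partition actually carries the OEM label:

    #!/usr/bin/env python3
    # Sketch: resolve the udev by-label symlink behind the escaped unit name
    # systemd-fsck@dev-disk-by\x2dlabel-OEM.service shown above.
    import os

    link = "/dev/disk/by-label/OEM"
    print(link, "->", os.path.realpath(link))
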
Nov 5 15:02:01.301478 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:02:01.301780 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:02:01.312017 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:02:01.312610 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:02:01.312908 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:02:01.328435 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:02:01.335816 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 15:02:01.340764 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:02:01.341096 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:02:01.341512 systemd[1]: Reached target time-set.target - System Time Set. Nov 5 15:02:01.356589 systemd[1]: Finished ensure-sysext.service. Nov 5 15:02:01.374916 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 15:02:01.375489 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 15:02:01.387455 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 15:02:01.388680 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 15:02:01.400113 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 15:02:01.400714 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 15:02:01.406674 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 5 15:02:01.411102 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 15:02:01.420849 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 15:02:01.422298 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 15:02:01.429855 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 5 15:02:01.438004 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 15:02:01.495276 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 5 15:02:01.503296 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 5 15:02:01.527080 augenrules[1931]: No rules Nov 5 15:02:01.529576 systemd[1]: audit-rules.service: Deactivated successfully. 
Nov 5 15:02:01.531354 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 15:02:04.413187 ldconfig[1897]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 5 15:02:04.419291 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 5 15:02:04.425362 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 5 15:02:04.457403 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 5 15:02:04.461462 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 15:02:04.464092 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 5 15:02:04.466952 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 5 15:02:04.470064 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 5 15:02:04.472730 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 5 15:02:04.475656 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 5 15:02:04.478494 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 5 15:02:04.478542 systemd[1]: Reached target paths.target - Path Units. Nov 5 15:02:04.480577 systemd[1]: Reached target timers.target - Timer Units. Nov 5 15:02:04.484125 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 5 15:02:04.489793 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 5 15:02:04.496736 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 5 15:02:04.500247 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 5 15:02:04.503303 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 5 15:02:04.526376 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 5 15:02:04.529305 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 5 15:02:04.533065 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 5 15:02:04.535769 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 15:02:04.538081 systemd[1]: Reached target basic.target - Basic System. Nov 5 15:02:04.540324 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 5 15:02:04.540383 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 5 15:02:04.542232 systemd[1]: Starting containerd.service - containerd container runtime... Nov 5 15:02:04.547074 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 5 15:02:04.554585 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 5 15:02:04.561599 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 5 15:02:04.573383 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 5 15:02:04.579559 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 5 15:02:04.583118 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). 
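
Note: several of the listeners above (docker.socket, the sshd sockets, systemd-hostnamed.socket, ...) are socket units: systemd owns the listening socket and passes it to the service on demand using the sd_listen_fds convention, i.e. LISTEN_PID/LISTEN_FDS in the environment and file descriptors starting at 3. A minimal sketch of the receiving side, assuming a single passed stream socket; real services normally go through the sd-daemon helpers instead:

    #!/usr/bin/env python3
    # Sketch of the receiving end of systemd socket activation, which the
    # .socket units listed above rely on: systemd sets LISTEN_PID/LISTEN_FDS
    # and passes the listening sockets starting at file descriptor 3.
    # Assumes exactly one passed stream socket.
    import os, socket

    SD_LISTEN_FDS_START = 3

    if os.environ.get("LISTEN_PID") == str(os.getpid()) and \
            int(os.environ.get("LISTEN_FDS", "0")) >= 1:
        sock = socket.socket(fileno=SD_LISTEN_FDS_START)  # adopt the passed fd
        conn, _peer = sock.accept()
        conn.sendall(b"hello from a socket-activated sketch\n")
        conn.close()
    else:
        print("not socket-activated; start it from a matching .socket unit")

A matching .socket unit would declare the listener with ListenStream= and leave Accept= at its default, so the whole listening socket is handed over rather than one connection at a time.
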
Nov 5 15:02:04.588578 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:02:04.597700 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 5 15:02:04.607684 systemd[1]: Started ntpd.service - Network Time Service. Nov 5 15:02:04.615514 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 15:02:04.629448 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 5 15:02:04.637454 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 5 15:02:04.646435 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 5 15:02:04.665019 jq[1946]: false Nov 5 15:02:04.657684 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 5 15:02:04.668604 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 5 15:02:04.671321 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 5 15:02:04.675709 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 5 15:02:04.682551 systemd[1]: Starting update-engine.service - Update Engine... Nov 5 15:02:04.715441 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 5 15:02:04.722117 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 5 15:02:04.726906 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 5 15:02:04.727432 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 5 15:02:04.792909 extend-filesystems[1947]: Found /dev/nvme0n1p6 Nov 5 15:02:04.814484 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 5 15:02:04.819173 extend-filesystems[1947]: Found /dev/nvme0n1p9 Nov 5 15:02:04.823797 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 5 15:02:04.831603 (ntainerd)[1976]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 5 15:02:04.858347 extend-filesystems[1947]: Checking size of /dev/nvme0n1p9 Nov 5 15:02:04.873635 systemd[1]: motdgen.service: Deactivated successfully. Nov 5 15:02:04.878079 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 5 15:02:04.891354 jq[1961]: true Nov 5 15:02:04.904544 tar[1975]: linux-arm64/LICENSE Nov 5 15:02:04.904544 tar[1975]: linux-arm64/helm Nov 5 15:02:04.942928 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 5 15:02:04.979864 jq[2006]: true Nov 5 15:02:04.986403 dbus-daemon[1944]: [system] SELinux support is enabled Nov 5 15:02:04.993172 extend-filesystems[1947]: Resized partition /dev/nvme0n1p9 Nov 5 15:02:05.003744 dbus-daemon[1944]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1583 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 5 15:02:05.004678 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 5 15:02:05.013513 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Nov 5 15:02:05.013563 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 5 15:02:05.016545 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 5 15:02:05.016580 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 5 15:02:05.023276 ntpd[1950]: ntpd 4.2.8p18@1.4062-o Wed Nov 5 13:12:54 UTC 2025 (1): Starting Nov 5 15:02:05.027171 ntpd[1950]: 5 Nov 15:02:05 ntpd[1950]: ntpd 4.2.8p18@1.4062-o Wed Nov 5 13:12:54 UTC 2025 (1): Starting Nov 5 15:02:05.027171 ntpd[1950]: 5 Nov 15:02:05 ntpd[1950]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 5 15:02:05.027171 ntpd[1950]: 5 Nov 15:02:05 ntpd[1950]: ---------------------------------------------------- Nov 5 15:02:05.027171 ntpd[1950]: 5 Nov 15:02:05 ntpd[1950]: ntp-4 is maintained by Network Time Foundation, Nov 5 15:02:05.027171 ntpd[1950]: 5 Nov 15:02:05 ntpd[1950]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 5 15:02:05.027171 ntpd[1950]: 5 Nov 15:02:05 ntpd[1950]: corporation. Support and training for ntp-4 are Nov 5 15:02:05.027171 ntpd[1950]: 5 Nov 15:02:05 ntpd[1950]: available at https://www.nwtime.org/support Nov 5 15:02:05.027171 ntpd[1950]: 5 Nov 15:02:05 ntpd[1950]: ---------------------------------------------------- Nov 5 15:02:05.025866 ntpd[1950]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 5 15:02:05.025888 ntpd[1950]: ---------------------------------------------------- Nov 5 15:02:05.025905 ntpd[1950]: ntp-4 is maintained by Network Time Foundation, Nov 5 15:02:05.025921 ntpd[1950]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 5 15:02:05.025937 ntpd[1950]: corporation. 
Support and training for ntp-4 are Nov 5 15:02:05.025954 ntpd[1950]: available at https://www.nwtime.org/support Nov 5 15:02:05.025970 ntpd[1950]: ---------------------------------------------------- Nov 5 15:02:05.041833 extend-filesystems[2016]: resize2fs 1.47.3 (8-Jul-2025) Nov 5 15:02:05.045846 update_engine[1960]: I20251105 15:02:05.039515 1960 main.cc:92] Flatcar Update Engine starting Nov 5 15:02:05.046252 ntpd[1950]: 5 Nov 15:02:05 ntpd[1950]: proto: precision = 0.096 usec (-23) Nov 5 15:02:05.046252 ntpd[1950]: 5 Nov 15:02:05 ntpd[1950]: basedate set to 2025-10-24 Nov 5 15:02:05.046252 ntpd[1950]: 5 Nov 15:02:05 ntpd[1950]: gps base set to 2025-10-26 (week 2390) Nov 5 15:02:05.032921 ntpd[1950]: proto: precision = 0.096 usec (-23) Nov 5 15:02:05.033414 ntpd[1950]: basedate set to 2025-10-24 Nov 5 15:02:05.033438 ntpd[1950]: gps base set to 2025-10-26 (week 2390) Nov 5 15:02:05.040624 dbus-daemon[1944]: [system] Successfully activated service 'org.freedesktop.systemd1' Nov 5 15:02:05.061362 ntpd[1950]: Listen and drop on 0 v6wildcard [::]:123 Nov 5 15:02:05.061436 ntpd[1950]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 5 15:02:05.061620 ntpd[1950]: 5 Nov 15:02:05 ntpd[1950]: Listen and drop on 0 v6wildcard [::]:123 Nov 5 15:02:05.061620 ntpd[1950]: 5 Nov 15:02:05 ntpd[1950]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 5 15:02:05.061805 ntpd[1950]: Listen normally on 2 lo 127.0.0.1:123 Nov 5 15:02:05.061992 ntpd[1950]: 5 Nov 15:02:05 ntpd[1950]: Listen normally on 2 lo 127.0.0.1:123 Nov 5 15:02:05.061992 ntpd[1950]: 5 Nov 15:02:05 ntpd[1950]: Listen normally on 3 eth0 172.31.21.83:123 Nov 5 15:02:05.061992 ntpd[1950]: 5 Nov 15:02:05 ntpd[1950]: Listen normally on 4 lo [::1]:123 Nov 5 15:02:05.061992 ntpd[1950]: 5 Nov 15:02:05 ntpd[1950]: Listen normally on 5 eth0 [fe80::4e0:63ff:fe0a:979%2]:123 Nov 5 15:02:05.061867 ntpd[1950]: Listen normally on 3 eth0 172.31.21.83:123 Nov 5 15:02:05.062296 ntpd[1950]: 5 Nov 15:02:05 ntpd[1950]: Listening on routing socket on fd #22 for interface updates Nov 5 15:02:05.061914 ntpd[1950]: Listen normally on 4 lo [::1]:123 Nov 5 15:02:05.061958 ntpd[1950]: Listen normally on 5 eth0 [fe80::4e0:63ff:fe0a:979%2]:123 Nov 5 15:02:05.061999 ntpd[1950]: Listening on routing socket on fd #22 for interface updates Nov 5 15:02:05.064718 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 5 15:02:05.068016 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 5 15:02:05.076174 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Nov 5 15:02:05.079746 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Nov 5 15:02:05.085990 systemd[1]: Started update-engine.service - Update Engine. 
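
Note: the EXT4 messages around here grow nvme0n1p9 from 1617920 to 2604027 blocks; with the 4 KiB block size reported in the extend-filesystems output that follows, that is roughly 6.2 GiB growing to about 9.9 GiB. A short sketch recomputing the figures:

    #!/usr/bin/env python3
    # Sketch: recompute the ext4 grow reported around here
    # (1617920 -> 2604027 blocks of 4 KiB, per the extend-filesystems output).
    BLOCK = 4096
    for label, blocks in (("before", 1_617_920), ("after", 2_604_027)):
        print(f"{label}: {blocks} blocks = {blocks * BLOCK / 2**30:.2f} GiB")
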
Nov 5 15:02:05.094668 update_engine[1960]: I20251105 15:02:05.091456 1960 update_check_scheduler.cc:74] Next update check in 11m19s Nov 5 15:02:05.114802 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Nov 5 15:02:05.114844 ntpd[1950]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 5 15:02:05.146489 ntpd[1950]: 5 Nov 15:02:05 ntpd[1950]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 5 15:02:05.146489 ntpd[1950]: 5 Nov 15:02:05 ntpd[1950]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 5 15:02:05.114891 ntpd[1950]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 5 15:02:05.152893 extend-filesystems[2016]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 5 15:02:05.152893 extend-filesystems[2016]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 5 15:02:05.152893 extend-filesystems[2016]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Nov 5 15:02:05.171607 extend-filesystems[1947]: Resized filesystem in /dev/nvme0n1p9 Nov 5 15:02:05.188723 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 5 15:02:05.192721 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 5 15:02:05.194268 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 5 15:02:05.250206 coreos-metadata[1943]: Nov 05 15:02:05.249 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 5 15:02:05.250206 coreos-metadata[1943]: Nov 05 15:02:05.250 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Nov 5 15:02:05.250206 coreos-metadata[1943]: Nov 05 15:02:05.250 INFO Fetch successful Nov 5 15:02:05.250206 coreos-metadata[1943]: Nov 05 15:02:05.250 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Nov 5 15:02:05.250206 coreos-metadata[1943]: Nov 05 15:02:05.250 INFO Fetch successful Nov 5 15:02:05.250842 coreos-metadata[1943]: Nov 05 15:02:05.250 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Nov 5 15:02:05.250842 coreos-metadata[1943]: Nov 05 15:02:05.250 INFO Fetch successful Nov 5 15:02:05.250842 coreos-metadata[1943]: Nov 05 15:02:05.250 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Nov 5 15:02:05.250842 coreos-metadata[1943]: Nov 05 15:02:05.250 INFO Fetch successful Nov 5 15:02:05.250842 coreos-metadata[1943]: Nov 05 15:02:05.250 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Nov 5 15:02:05.254281 coreos-metadata[1943]: Nov 05 15:02:05.254 INFO Fetch failed with 404: resource not found Nov 5 15:02:05.254281 coreos-metadata[1943]: Nov 05 15:02:05.254 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Nov 5 15:02:05.254281 coreos-metadata[1943]: Nov 05 15:02:05.254 INFO Fetch successful Nov 5 15:02:05.254281 coreos-metadata[1943]: Nov 05 15:02:05.254 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Nov 5 15:02:05.254281 coreos-metadata[1943]: Nov 05 15:02:05.254 INFO Fetch successful Nov 5 15:02:05.254281 coreos-metadata[1943]: Nov 05 15:02:05.254 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Nov 5 15:02:05.257674 coreos-metadata[1943]: Nov 05 15:02:05.257 INFO Fetch successful Nov 5 15:02:05.257674 coreos-metadata[1943]: Nov 05 15:02:05.257 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Nov 5 15:02:05.262127 coreos-metadata[1943]: Nov 05 
15:02:05.262 INFO Fetch successful Nov 5 15:02:05.262302 coreos-metadata[1943]: Nov 05 15:02:05.262 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Nov 5 15:02:05.262302 coreos-metadata[1943]: Nov 05 15:02:05.262 INFO Fetch successful Nov 5 15:02:05.408289 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 5 15:02:05.411254 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 15:02:05.425526 systemd-logind[1959]: Watching system buttons on /dev/input/event0 (Power Button) Nov 5 15:02:05.425591 systemd-logind[1959]: Watching system buttons on /dev/input/event1 (Sleep Button) Nov 5 15:02:05.426905 systemd-logind[1959]: New seat seat0. Nov 5 15:02:05.431568 systemd[1]: Started systemd-logind.service - User Login Management. Nov 5 15:02:05.590236 bash[2042]: Updated "/home/core/.ssh/authorized_keys" Nov 5 15:02:05.590438 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 5 15:02:05.605810 systemd[1]: Starting sshkeys.service... Nov 5 15:02:05.666258 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 5 15:02:05.680892 dbus-daemon[1944]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 5 15:02:05.689067 dbus-daemon[1944]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2018 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 5 15:02:05.734168 amazon-ssm-agent[2021]: Initializing new seelog logger Nov 5 15:02:05.734168 amazon-ssm-agent[2021]: New Seelog Logger Creation Complete Nov 5 15:02:05.734168 amazon-ssm-agent[2021]: 2025/11/05 15:02:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 15:02:05.734168 amazon-ssm-agent[2021]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 15:02:05.734168 amazon-ssm-agent[2021]: 2025/11/05 15:02:05 processing appconfig overrides Nov 5 15:02:05.725239 systemd[1]: Starting polkit.service - Authorization Manager... Nov 5 15:02:05.761165 amazon-ssm-agent[2021]: 2025/11/05 15:02:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 15:02:05.761165 amazon-ssm-agent[2021]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 15:02:05.761165 amazon-ssm-agent[2021]: 2025/11/05 15:02:05 processing appconfig overrides Nov 5 15:02:05.761165 amazon-ssm-agent[2021]: 2025/11/05 15:02:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 15:02:05.761165 amazon-ssm-agent[2021]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 15:02:05.761165 amazon-ssm-agent[2021]: 2025/11/05 15:02:05 processing appconfig overrides Nov 5 15:02:05.773165 amazon-ssm-agent[2021]: 2025-11-05 15:02:05.7454 INFO Proxy environment variables: Nov 5 15:02:05.779721 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 5 15:02:05.789688 amazon-ssm-agent[2021]: 2025/11/05 15:02:05 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 15:02:05.789688 amazon-ssm-agent[2021]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 15:02:05.789688 amazon-ssm-agent[2021]: 2025/11/05 15:02:05 processing appconfig overrides Nov 5 15:02:05.791600 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
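
Note: coreos-metadata above uses IMDSv2: a PUT to /latest/api/token yields a session token that is sent on every subsequent GET, and endpoints that do not apply to the instance (the ipv6 one here) simply return 404. A sketch of the same exchange; the URLs are taken from the log, while the token TTL value is an arbitrary choice for illustration:

    #!/usr/bin/env python3
    # Sketch of the IMDSv2 exchange coreos-metadata performs above: PUT for a
    # session token, then token-authenticated GETs; a 404 (as for the ipv6
    # endpoint in the log) just means the field does not apply.
    import urllib.error, urllib.request

    IMDS = "http://169.254.169.254"

    req = urllib.request.Request(f"{IMDS}/latest/api/token", method="PUT",
                                 headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"})
    token = urllib.request.urlopen(req, timeout=2).read().decode()

    for path in ("meta-data/instance-id", "meta-data/local-ipv4", "meta-data/ipv6"):
        get = urllib.request.Request(f"{IMDS}/2021-01-03/{path}",
                                     headers={"X-aws-ec2-metadata-token": token})
        try:
            print(path, "=", urllib.request.urlopen(get, timeout=2).read().decode())
        except urllib.error.HTTPError as err:
            print(path, "-> HTTP", err.code)
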
Nov 5 15:02:05.870805 amazon-ssm-agent[2021]: 2025-11-05 15:02:05.7458 INFO https_proxy: Nov 5 15:02:05.971227 amazon-ssm-agent[2021]: 2025-11-05 15:02:05.7458 INFO http_proxy: Nov 5 15:02:06.044337 containerd[1976]: time="2025-11-05T15:02:06Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 5 15:02:06.044337 containerd[1976]: time="2025-11-05T15:02:06.043896512Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 5 15:02:06.071539 amazon-ssm-agent[2021]: 2025-11-05 15:02:05.7458 INFO no_proxy: Nov 5 15:02:06.079729 containerd[1976]: time="2025-11-05T15:02:06.079663112Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t=1.51842ms Nov 5 15:02:06.079896 containerd[1976]: time="2025-11-05T15:02:06.079863404Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 5 15:02:06.080036 containerd[1976]: time="2025-11-05T15:02:06.080006516Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 5 15:02:06.080471 containerd[1976]: time="2025-11-05T15:02:06.080432036Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 5 15:02:06.080633 containerd[1976]: time="2025-11-05T15:02:06.080604500Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 5 15:02:06.080768 containerd[1976]: time="2025-11-05T15:02:06.080740736Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:02:06.080989 containerd[1976]: time="2025-11-05T15:02:06.080955896Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:02:06.081105 containerd[1976]: time="2025-11-05T15:02:06.081076208Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:02:06.081636 containerd[1976]: time="2025-11-05T15:02:06.081567332Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:02:06.081805 containerd[1976]: time="2025-11-05T15:02:06.081771549Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:02:06.081921 containerd[1976]: time="2025-11-05T15:02:06.081892185Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:02:06.082018 containerd[1976]: time="2025-11-05T15:02:06.081992361Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 5 15:02:06.082389 containerd[1976]: time="2025-11-05T15:02:06.082351245Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 5 15:02:06.082915 containerd[1976]: time="2025-11-05T15:02:06.082875753Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 
15:02:06.083071 containerd[1976]: time="2025-11-05T15:02:06.083041401Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 15:02:06.083226 containerd[1976]: time="2025-11-05T15:02:06.083197593Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 5 15:02:06.083396 containerd[1976]: time="2025-11-05T15:02:06.083367465Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 5 15:02:06.083923 containerd[1976]: time="2025-11-05T15:02:06.083890797Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 5 15:02:06.084187 containerd[1976]: time="2025-11-05T15:02:06.084122733Z" level=info msg="metadata content store policy set" policy=shared Nov 5 15:02:06.098545 containerd[1976]: time="2025-11-05T15:02:06.096845505Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 5 15:02:06.098545 containerd[1976]: time="2025-11-05T15:02:06.096971829Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 5 15:02:06.098545 containerd[1976]: time="2025-11-05T15:02:06.097008237Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 5 15:02:06.098545 containerd[1976]: time="2025-11-05T15:02:06.097042401Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 5 15:02:06.098545 containerd[1976]: time="2025-11-05T15:02:06.097071705Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 5 15:02:06.098545 containerd[1976]: time="2025-11-05T15:02:06.097103433Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 5 15:02:06.098545 containerd[1976]: time="2025-11-05T15:02:06.097134705Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 5 15:02:06.098545 containerd[1976]: time="2025-11-05T15:02:06.097291209Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 5 15:02:06.098545 containerd[1976]: time="2025-11-05T15:02:06.097322685Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 5 15:02:06.098545 containerd[1976]: time="2025-11-05T15:02:06.097349661Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 5 15:02:06.098545 containerd[1976]: time="2025-11-05T15:02:06.097373709Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 5 15:02:06.098545 containerd[1976]: time="2025-11-05T15:02:06.097403589Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 5 15:02:06.098545 containerd[1976]: time="2025-11-05T15:02:06.097637685Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 5 15:02:06.098545 containerd[1976]: time="2025-11-05T15:02:06.097681377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 5 15:02:06.099350 containerd[1976]: time="2025-11-05T15:02:06.097722825Z" 
level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 5 15:02:06.099350 containerd[1976]: time="2025-11-05T15:02:06.097749441Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 5 15:02:06.099350 containerd[1976]: time="2025-11-05T15:02:06.097777593Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 5 15:02:06.099350 containerd[1976]: time="2025-11-05T15:02:06.097804377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 5 15:02:06.099350 containerd[1976]: time="2025-11-05T15:02:06.097830909Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 5 15:02:06.099350 containerd[1976]: time="2025-11-05T15:02:06.097856817Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 5 15:02:06.099350 containerd[1976]: time="2025-11-05T15:02:06.097883577Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 5 15:02:06.099350 containerd[1976]: time="2025-11-05T15:02:06.097909125Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 5 15:02:06.099350 containerd[1976]: time="2025-11-05T15:02:06.097935117Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 5 15:02:06.099350 containerd[1976]: time="2025-11-05T15:02:06.098344461Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 5 15:02:06.099350 containerd[1976]: time="2025-11-05T15:02:06.098381877Z" level=info msg="Start snapshots syncer" Nov 5 15:02:06.100206 containerd[1976]: time="2025-11-05T15:02:06.099937773Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 5 15:02:06.102023 containerd[1976]: time="2025-11-05T15:02:06.100999209Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 5 15:02:06.102023 containerd[1976]: time="2025-11-05T15:02:06.101213589Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 5 15:02:06.102923 containerd[1976]: time="2025-11-05T15:02:06.102829053Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 5 15:02:06.116547 containerd[1976]: time="2025-11-05T15:02:06.116477133Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 5 15:02:06.116688 containerd[1976]: time="2025-11-05T15:02:06.116581017Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 5 15:02:06.116688 containerd[1976]: time="2025-11-05T15:02:06.116615685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 5 15:02:06.116782 containerd[1976]: time="2025-11-05T15:02:06.116684661Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 5 15:02:06.116782 containerd[1976]: time="2025-11-05T15:02:06.116741253Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 5 15:02:06.116782 containerd[1976]: time="2025-11-05T15:02:06.116772765Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 5 15:02:06.116904 containerd[1976]: time="2025-11-05T15:02:06.116801145Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 5 15:02:06.116955 containerd[1976]: time="2025-11-05T15:02:06.116931981Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 5 15:02:06.117012 containerd[1976]: 
time="2025-11-05T15:02:06.116964045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 5 15:02:06.118437 containerd[1976]: time="2025-11-05T15:02:06.118369725Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 5 15:02:06.118678 containerd[1976]: time="2025-11-05T15:02:06.118509981Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 15:02:06.118678 containerd[1976]: time="2025-11-05T15:02:06.118545969Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 15:02:06.118678 containerd[1976]: time="2025-11-05T15:02:06.118569177Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 15:02:06.118678 containerd[1976]: time="2025-11-05T15:02:06.118593825Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 15:02:06.118678 containerd[1976]: time="2025-11-05T15:02:06.118614777Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 15:02:06.118678 containerd[1976]: time="2025-11-05T15:02:06.118639089Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 15:02:06.118678 containerd[1976]: time="2025-11-05T15:02:06.118667121Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 15:02:06.118981 containerd[1976]: time="2025-11-05T15:02:06.118835037Z" level=info msg="runtime interface created" Nov 5 15:02:06.118981 containerd[1976]: time="2025-11-05T15:02:06.118853229Z" level=info msg="created NRI interface" Nov 5 15:02:06.118981 containerd[1976]: time="2025-11-05T15:02:06.118874853Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 5 15:02:06.118981 containerd[1976]: time="2025-11-05T15:02:06.118904877Z" level=info msg="Connect containerd service" Nov 5 15:02:06.118981 containerd[1976]: time="2025-11-05T15:02:06.118965561Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 15:02:06.138440 containerd[1976]: time="2025-11-05T15:02:06.137665653Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 15:02:06.181770 amazon-ssm-agent[2021]: 2025-11-05 15:02:05.7460 INFO Checking if agent identity type OnPrem can be assumed Nov 5 15:02:06.307192 amazon-ssm-agent[2021]: 2025-11-05 15:02:05.7461 INFO Checking if agent identity type EC2 can be assumed Nov 5 15:02:06.363945 coreos-metadata[2110]: Nov 05 15:02:06.363 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 5 15:02:06.377181 coreos-metadata[2110]: Nov 05 15:02:06.374 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Nov 5 15:02:06.380673 coreos-metadata[2110]: Nov 05 15:02:06.378 INFO Fetch successful Nov 5 15:02:06.380673 coreos-metadata[2110]: Nov 05 15:02:06.378 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 5 15:02:06.386192 coreos-metadata[2110]: Nov 05 
15:02:06.383 INFO Fetch successful Nov 5 15:02:06.393844 unknown[2110]: wrote ssh authorized keys file for user: core Nov 5 15:02:06.408329 amazon-ssm-agent[2021]: 2025-11-05 15:02:06.2012 INFO Agent will take identity from EC2 Nov 5 15:02:06.414813 locksmithd[2022]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 5 15:02:06.462585 polkitd[2103]: Started polkitd version 126 Nov 5 15:02:06.503604 amazon-ssm-agent[2021]: 2025-11-05 15:02:06.2095 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Nov 5 15:02:06.511013 update-ssh-keys[2181]: Updated "/home/core/.ssh/authorized_keys" Nov 5 15:02:06.515822 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 5 15:02:06.535650 systemd[1]: Finished sshkeys.service. Nov 5 15:02:06.554824 sshd_keygen[1993]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 5 15:02:06.566050 polkitd[2103]: Loading rules from directory /etc/polkit-1/rules.d Nov 5 15:02:06.568790 polkitd[2103]: Loading rules from directory /run/polkit-1/rules.d Nov 5 15:02:06.568900 polkitd[2103]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 5 15:02:06.569595 polkitd[2103]: Loading rules from directory /usr/local/share/polkit-1/rules.d Nov 5 15:02:06.569669 polkitd[2103]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 5 15:02:06.569757 polkitd[2103]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 5 15:02:06.576098 polkitd[2103]: Finished loading, compiling and executing 2 rules Nov 5 15:02:06.580437 systemd[1]: Started polkit.service - Authorization Manager. Nov 5 15:02:06.589987 dbus-daemon[1944]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 5 15:02:06.593266 polkitd[2103]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 5 15:02:06.625216 amazon-ssm-agent[2021]: 2025-11-05 15:02:06.2095 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Nov 5 15:02:06.712288 systemd-hostnamed[2018]: Hostname set to (transient) Nov 5 15:02:06.712477 systemd-resolved[1564]: System hostname changed to 'ip-172-31-21-83'. Nov 5 15:02:06.720570 amazon-ssm-agent[2021]: 2025-11-05 15:02:06.2095 INFO [amazon-ssm-agent] Starting Core Agent Nov 5 15:02:06.745184 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 5 15:02:06.756888 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 5 15:02:06.815054 systemd[1]: issuegen.service: Deactivated successfully. Nov 5 15:02:06.815608 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 5 15:02:06.821168 amazon-ssm-agent[2021]: 2025-11-05 15:02:06.2095 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Nov 5 15:02:06.831437 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Nov 5 15:02:06.878484 containerd[1976]: time="2025-11-05T15:02:06.877306944Z" level=info msg="Start subscribing containerd event" Nov 5 15:02:06.878484 containerd[1976]: time="2025-11-05T15:02:06.877419912Z" level=info msg="Start recovering state" Nov 5 15:02:06.878484 containerd[1976]: time="2025-11-05T15:02:06.877575996Z" level=info msg="Start event monitor" Nov 5 15:02:06.878484 containerd[1976]: time="2025-11-05T15:02:06.877603548Z" level=info msg="Start cni network conf syncer for default" Nov 5 15:02:06.878484 containerd[1976]: time="2025-11-05T15:02:06.877624932Z" level=info msg="Start streaming server" Nov 5 15:02:06.878484 containerd[1976]: time="2025-11-05T15:02:06.877645572Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 15:02:06.878484 containerd[1976]: time="2025-11-05T15:02:06.877665264Z" level=info msg="runtime interface starting up..." Nov 5 15:02:06.878484 containerd[1976]: time="2025-11-05T15:02:06.877679784Z" level=info msg="starting plugins..." Nov 5 15:02:06.878484 containerd[1976]: time="2025-11-05T15:02:06.877712316Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 15:02:06.878971 containerd[1976]: time="2025-11-05T15:02:06.878512536Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 5 15:02:06.878971 containerd[1976]: time="2025-11-05T15:02:06.878736972Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 5 15:02:06.879090 systemd[1]: Started containerd.service - containerd container runtime. Nov 5 15:02:06.883738 containerd[1976]: time="2025-11-05T15:02:06.883673640Z" level=info msg="containerd successfully booted in 0.842937s" Nov 5 15:02:06.902320 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 5 15:02:06.916744 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 5 15:02:06.921002 amazon-ssm-agent[2021]: 2025-11-05 15:02:06.2095 INFO [Registrar] Starting registrar module Nov 5 15:02:06.927081 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 5 15:02:06.930755 systemd[1]: Reached target getty.target - Login Prompts. Nov 5 15:02:07.021243 amazon-ssm-agent[2021]: 2025-11-05 15:02:06.2363 INFO [EC2Identity] Checking disk for registration info Nov 5 15:02:07.120965 amazon-ssm-agent[2021]: 2025-11-05 15:02:06.2364 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Nov 5 15:02:07.177219 tar[1975]: linux-arm64/README.md Nov 5 15:02:07.202645 amazon-ssm-agent[2021]: 2025/11/05 15:02:07 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 15:02:07.202645 amazon-ssm-agent[2021]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 15:02:07.202849 amazon-ssm-agent[2021]: 2025/11/05 15:02:07 processing appconfig overrides Nov 5 15:02:07.210092 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 5 15:02:07.221525 amazon-ssm-agent[2021]: 2025-11-05 15:02:06.2364 INFO [EC2Identity] Generating registration keypair Nov 5 15:02:07.244967 amazon-ssm-agent[2021]: 2025-11-05 15:02:07.1510 INFO [EC2Identity] Checking write access before registering Nov 5 15:02:07.244967 amazon-ssm-agent[2021]: 2025-11-05 15:02:07.1519 INFO [EC2Identity] Registering EC2 instance with Systems Manager Nov 5 15:02:07.244967 amazon-ssm-agent[2021]: 2025-11-05 15:02:07.2009 INFO [EC2Identity] EC2 registration was successful. Nov 5 15:02:07.244967 amazon-ssm-agent[2021]: 2025-11-05 15:02:07.2009 INFO [amazon-ssm-agent] Registration attempted. 
Resuming core agent startup. Nov 5 15:02:07.244967 amazon-ssm-agent[2021]: 2025-11-05 15:02:07.2011 INFO [CredentialRefresher] credentialRefresher has started Nov 5 15:02:07.244967 amazon-ssm-agent[2021]: 2025-11-05 15:02:07.2011 INFO [CredentialRefresher] Starting credentials refresher loop Nov 5 15:02:07.244967 amazon-ssm-agent[2021]: 2025-11-05 15:02:07.2444 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 5 15:02:07.244967 amazon-ssm-agent[2021]: 2025-11-05 15:02:07.2447 INFO [CredentialRefresher] Credentials ready Nov 5 15:02:07.321386 amazon-ssm-agent[2021]: 2025-11-05 15:02:07.2450 INFO [CredentialRefresher] Next credential rotation will be in 29.999990961800002 minutes Nov 5 15:02:08.273256 amazon-ssm-agent[2021]: 2025-11-05 15:02:08.2729 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 5 15:02:08.373631 amazon-ssm-agent[2021]: 2025-11-05 15:02:08.2766 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2230) started Nov 5 15:02:08.474733 amazon-ssm-agent[2021]: 2025-11-05 15:02:08.2766 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 5 15:02:09.577398 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 5 15:02:09.582580 systemd[1]: Started sshd@0-172.31.21.83:22-139.178.89.65:57580.service - OpenSSH per-connection server daemon (139.178.89.65:57580). Nov 5 15:02:09.773452 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:02:09.780278 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 5 15:02:09.788535 systemd[1]: Startup finished in 4.071s (kernel) + 12.904s (initrd) + 15.504s (userspace) = 32.480s. Nov 5 15:02:09.795858 (kubelet)[2251]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:02:09.947982 sshd[2243]: Accepted publickey for core from 139.178.89.65 port 57580 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:02:09.952665 sshd-session[2243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:02:09.966211 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 5 15:02:09.968443 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 5 15:02:09.984853 systemd-logind[1959]: New session 1 of user core. Nov 5 15:02:10.012793 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 5 15:02:10.021030 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 5 15:02:10.042897 (systemd)[2256]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 5 15:02:10.053413 systemd-logind[1959]: New session c1 of user core. Nov 5 15:02:10.376806 systemd[2256]: Queued start job for default target default.target. Nov 5 15:02:10.390279 systemd[2256]: Created slice app.slice - User Application Slice. Nov 5 15:02:10.390517 systemd[2256]: Reached target paths.target - Paths. Nov 5 15:02:10.390722 systemd[2256]: Reached target timers.target - Timers. Nov 5 15:02:10.393452 systemd[2256]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 5 15:02:10.433448 systemd[2256]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 5 15:02:10.433674 systemd[2256]: Reached target sockets.target - Sockets. 
Nov 5 15:02:10.433760 systemd[2256]: Reached target basic.target - Basic System. Nov 5 15:02:10.433843 systemd[2256]: Reached target default.target - Main User Target. Nov 5 15:02:10.433902 systemd[2256]: Startup finished in 361ms. Nov 5 15:02:10.435099 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 5 15:02:10.447442 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 5 15:02:10.600232 systemd[1]: Started sshd@1-172.31.21.83:22-139.178.89.65:57584.service - OpenSSH per-connection server daemon (139.178.89.65:57584). Nov 5 15:02:10.876422 sshd[2273]: Accepted publickey for core from 139.178.89.65 port 57584 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:02:10.879480 sshd-session[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:02:10.890043 systemd-logind[1959]: New session 2 of user core. Nov 5 15:02:10.903411 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 5 15:02:11.033753 sshd[2276]: Connection closed by 139.178.89.65 port 57584 Nov 5 15:02:11.034444 sshd-session[2273]: pam_unix(sshd:session): session closed for user core Nov 5 15:02:11.043351 systemd-logind[1959]: Session 2 logged out. Waiting for processes to exit. Nov 5 15:02:11.044347 systemd[1]: sshd@1-172.31.21.83:22-139.178.89.65:57584.service: Deactivated successfully. Nov 5 15:02:11.048798 systemd[1]: session-2.scope: Deactivated successfully. Nov 5 15:02:11.051971 systemd-logind[1959]: Removed session 2. Nov 5 15:02:11.066612 systemd[1]: Started sshd@2-172.31.21.83:22-139.178.89.65:57596.service - OpenSSH per-connection server daemon (139.178.89.65:57596). Nov 5 15:02:11.273053 sshd[2282]: Accepted publickey for core from 139.178.89.65 port 57596 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:02:11.275915 sshd-session[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:02:11.288284 systemd-logind[1959]: New session 3 of user core. Nov 5 15:02:11.291458 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 5 15:02:11.410417 sshd[2285]: Connection closed by 139.178.89.65 port 57596 Nov 5 15:02:11.410869 sshd-session[2282]: pam_unix(sshd:session): session closed for user core Nov 5 15:02:11.421976 systemd[1]: sshd@2-172.31.21.83:22-139.178.89.65:57596.service: Deactivated successfully. Nov 5 15:02:11.426842 systemd[1]: session-3.scope: Deactivated successfully. Nov 5 15:02:11.429543 systemd-logind[1959]: Session 3 logged out. Waiting for processes to exit. Nov 5 15:02:11.447598 systemd[1]: Started sshd@3-172.31.21.83:22-139.178.89.65:57610.service - OpenSSH per-connection server daemon (139.178.89.65:57610). Nov 5 15:02:11.448287 systemd-logind[1959]: Removed session 3. Nov 5 15:02:11.644223 sshd[2291]: Accepted publickey for core from 139.178.89.65 port 57610 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:02:11.646306 sshd-session[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:02:11.654874 systemd-logind[1959]: New session 4 of user core. Nov 5 15:02:11.667440 systemd[1]: Started session-4.scope - Session 4 of User core. 
Nov 5 15:02:11.785487 kubelet[2251]: E1105 15:02:11.785398 2251 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:02:11.790638 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:02:11.791127 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:02:11.793333 systemd[1]: kubelet.service: Consumed 1.443s CPU time, 256.3M memory peak. Nov 5 15:02:11.803203 sshd[2294]: Connection closed by 139.178.89.65 port 57610 Nov 5 15:02:11.804030 sshd-session[2291]: pam_unix(sshd:session): session closed for user core Nov 5 15:02:11.811731 systemd-logind[1959]: Session 4 logged out. Waiting for processes to exit. Nov 5 15:02:11.812875 systemd[1]: sshd@3-172.31.21.83:22-139.178.89.65:57610.service: Deactivated successfully. Nov 5 15:02:11.815921 systemd[1]: session-4.scope: Deactivated successfully. Nov 5 15:02:11.818967 systemd-logind[1959]: Removed session 4. Nov 5 15:02:11.837760 systemd[1]: Started sshd@4-172.31.21.83:22-139.178.89.65:57618.service - OpenSSH per-connection server daemon (139.178.89.65:57618). Nov 5 15:02:11.590048 systemd-resolved[1564]: Clock change detected. Flushing caches. Nov 5 15:02:11.600077 systemd-journald[1489]: Time jumped backwards, rotating. Nov 5 15:02:11.600203 sshd[2302]: Accepted publickey for core from 139.178.89.65 port 57618 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:02:11.602502 sshd-session[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:02:11.610832 systemd-logind[1959]: New session 5 of user core. Nov 5 15:02:11.620443 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 5 15:02:11.809731 sudo[2307]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 5 15:02:11.810360 sudo[2307]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:02:11.826070 sudo[2307]: pam_unix(sudo:session): session closed for user root Nov 5 15:02:11.851310 sshd[2306]: Connection closed by 139.178.89.65 port 57618 Nov 5 15:02:11.850925 sshd-session[2302]: pam_unix(sshd:session): session closed for user core Nov 5 15:02:11.860404 systemd[1]: sshd@4-172.31.21.83:22-139.178.89.65:57618.service: Deactivated successfully. Nov 5 15:02:11.864066 systemd[1]: session-5.scope: Deactivated successfully. Nov 5 15:02:11.865616 systemd-logind[1959]: Session 5 logged out. Waiting for processes to exit. Nov 5 15:02:11.868592 systemd-logind[1959]: Removed session 5. Nov 5 15:02:11.886743 systemd[1]: Started sshd@5-172.31.21.83:22-139.178.89.65:57622.service - OpenSSH per-connection server daemon (139.178.89.65:57622). Nov 5 15:02:12.090315 sshd[2313]: Accepted publickey for core from 139.178.89.65 port 57622 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:02:12.093085 sshd-session[2313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:02:12.102261 systemd-logind[1959]: New session 6 of user core. Nov 5 15:02:12.113407 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 5 15:02:12.218211 sudo[2318]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 5 15:02:12.219286 sudo[2318]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:02:12.228192 sudo[2318]: pam_unix(sudo:session): session closed for user root Nov 5 15:02:12.239913 sudo[2317]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 5 15:02:12.240583 sudo[2317]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:02:12.259124 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 15:02:12.316972 augenrules[2340]: No rules Nov 5 15:02:12.319289 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 15:02:12.319726 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 15:02:12.322819 sudo[2317]: pam_unix(sudo:session): session closed for user root Nov 5 15:02:12.346703 sshd[2316]: Connection closed by 139.178.89.65 port 57622 Nov 5 15:02:12.347551 sshd-session[2313]: pam_unix(sshd:session): session closed for user core Nov 5 15:02:12.355876 systemd[1]: sshd@5-172.31.21.83:22-139.178.89.65:57622.service: Deactivated successfully. Nov 5 15:02:12.360500 systemd[1]: session-6.scope: Deactivated successfully. Nov 5 15:02:12.363290 systemd-logind[1959]: Session 6 logged out. Waiting for processes to exit. Nov 5 15:02:12.365396 systemd-logind[1959]: Removed session 6. Nov 5 15:02:12.381081 systemd[1]: Started sshd@6-172.31.21.83:22-139.178.89.65:57638.service - OpenSSH per-connection server daemon (139.178.89.65:57638). Nov 5 15:02:12.573264 sshd[2349]: Accepted publickey for core from 139.178.89.65 port 57638 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:02:12.575553 sshd-session[2349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:02:12.584574 systemd-logind[1959]: New session 7 of user core. Nov 5 15:02:12.597446 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 5 15:02:12.703363 sudo[2353]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 5 15:02:12.703987 sudo[2353]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:02:14.098248 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 5 15:02:14.121741 (dockerd)[2370]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 5 15:02:15.235192 dockerd[2370]: time="2025-11-05T15:02:15.233376714Z" level=info msg="Starting up" Nov 5 15:02:15.237342 dockerd[2370]: time="2025-11-05T15:02:15.237282834Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 5 15:02:15.259596 dockerd[2370]: time="2025-11-05T15:02:15.259540926Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 5 15:02:15.300963 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport352919070-merged.mount: Deactivated successfully. Nov 5 15:02:15.332361 dockerd[2370]: time="2025-11-05T15:02:15.332302746Z" level=info msg="Loading containers: start." Nov 5 15:02:15.350628 kernel: Initializing XFRM netlink socket Nov 5 15:02:15.942359 (udev-worker)[2392]: Network interface NamePolicy= disabled on kernel command line. 
Nov 5 15:02:16.057265 systemd-networkd[1583]: docker0: Link UP Nov 5 15:02:16.068744 dockerd[2370]: time="2025-11-05T15:02:16.068667822Z" level=info msg="Loading containers: done." Nov 5 15:02:16.101355 dockerd[2370]: time="2025-11-05T15:02:16.101293338Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 5 15:02:16.101552 dockerd[2370]: time="2025-11-05T15:02:16.101410782Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 5 15:02:16.101606 dockerd[2370]: time="2025-11-05T15:02:16.101556210Z" level=info msg="Initializing buildkit" Nov 5 15:02:16.153798 dockerd[2370]: time="2025-11-05T15:02:16.153742494Z" level=info msg="Completed buildkit initialization" Nov 5 15:02:16.168370 dockerd[2370]: time="2025-11-05T15:02:16.168281454Z" level=info msg="Daemon has completed initialization" Nov 5 15:02:16.168650 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 5 15:02:16.170213 dockerd[2370]: time="2025-11-05T15:02:16.168543774Z" level=info msg="API listen on /run/docker.sock" Nov 5 15:02:16.292239 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2861407065-merged.mount: Deactivated successfully. Nov 5 15:02:18.180365 containerd[1976]: time="2025-11-05T15:02:18.179604320Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 5 15:02:19.044020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3931222056.mount: Deactivated successfully. Nov 5 15:02:20.851015 containerd[1976]: time="2025-11-05T15:02:20.850951922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:02:20.853176 containerd[1976]: time="2025-11-05T15:02:20.852736106Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363685" Nov 5 15:02:20.853556 containerd[1976]: time="2025-11-05T15:02:20.853510010Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:02:20.860670 containerd[1976]: time="2025-11-05T15:02:20.860611190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:02:20.862690 containerd[1976]: time="2025-11-05T15:02:20.862610462Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 2.682941654s" Nov 5 15:02:20.862690 containerd[1976]: time="2025-11-05T15:02:20.862683878Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Nov 5 15:02:20.863635 containerd[1976]: time="2025-11-05T15:02:20.863594774Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 5 15:02:21.383811 systemd[1]: kubelet.service: Scheduled restart job, restart counter is 
at 1. Nov 5 15:02:21.387400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:02:21.742234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:02:21.756985 (kubelet)[2647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:02:21.846116 kubelet[2647]: E1105 15:02:21.846023 2647 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:02:21.853775 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:02:21.854120 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:02:21.854835 systemd[1]: kubelet.service: Consumed 345ms CPU time, 109.6M memory peak. Nov 5 15:02:22.766195 containerd[1976]: time="2025-11-05T15:02:22.765451791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:02:22.766851 containerd[1976]: time="2025-11-05T15:02:22.766762287Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531200" Nov 5 15:02:22.768145 containerd[1976]: time="2025-11-05T15:02:22.768089619Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:02:22.774614 containerd[1976]: time="2025-11-05T15:02:22.774533235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:02:22.776878 containerd[1976]: time="2025-11-05T15:02:22.776812599Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.911883329s" Nov 5 15:02:22.777488 containerd[1976]: time="2025-11-05T15:02:22.777052767Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Nov 5 15:02:22.778001 containerd[1976]: time="2025-11-05T15:02:22.777799335Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 5 15:02:24.318173 containerd[1976]: time="2025-11-05T15:02:24.318030351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:02:24.321475 containerd[1976]: time="2025-11-05T15:02:24.321317115Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484324" Nov 5 15:02:24.321475 containerd[1976]: time="2025-11-05T15:02:24.321360843Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Nov 5 15:02:24.328179 containerd[1976]: time="2025-11-05T15:02:24.328031139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:02:24.330389 containerd[1976]: time="2025-11-05T15:02:24.330183399Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.552005968s" Nov 5 15:02:24.330389 containerd[1976]: time="2025-11-05T15:02:24.330245475Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Nov 5 15:02:24.331590 containerd[1976]: time="2025-11-05T15:02:24.331503363Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 5 15:02:25.850224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount732197316.mount: Deactivated successfully. Nov 5 15:02:26.455409 containerd[1976]: time="2025-11-05T15:02:26.455341313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:02:26.457223 containerd[1976]: time="2025-11-05T15:02:26.456305345Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417817" Nov 5 15:02:26.459227 containerd[1976]: time="2025-11-05T15:02:26.458376785Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:02:26.463241 containerd[1976]: time="2025-11-05T15:02:26.462370697Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:02:26.464329 containerd[1976]: time="2025-11-05T15:02:26.463793681Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 2.13213991s" Nov 5 15:02:26.464329 containerd[1976]: time="2025-11-05T15:02:26.463864637Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Nov 5 15:02:26.464976 containerd[1976]: time="2025-11-05T15:02:26.464928341Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 5 15:02:27.094680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1150600429.mount: Deactivated successfully. 
Nov 5 15:02:28.305545 containerd[1976]: time="2025-11-05T15:02:28.305476507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:02:28.308376 containerd[1976]: time="2025-11-05T15:02:28.308284531Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Nov 5 15:02:28.309742 containerd[1976]: time="2025-11-05T15:02:28.309660883Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:02:28.316629 containerd[1976]: time="2025-11-05T15:02:28.316561195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:02:28.318992 containerd[1976]: time="2025-11-05T15:02:28.318923527Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.85310709s" Nov 5 15:02:28.319265 containerd[1976]: time="2025-11-05T15:02:28.319219411Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Nov 5 15:02:28.320481 containerd[1976]: time="2025-11-05T15:02:28.320406919Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 5 15:02:28.898760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3079153266.mount: Deactivated successfully. 
Nov 5 15:02:28.906941 containerd[1976]: time="2025-11-05T15:02:28.905687674Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:02:28.906941 containerd[1976]: time="2025-11-05T15:02:28.906889510Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Nov 5 15:02:28.907820 containerd[1976]: time="2025-11-05T15:02:28.907783666Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:02:28.912339 containerd[1976]: time="2025-11-05T15:02:28.912291562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:02:28.913413 containerd[1976]: time="2025-11-05T15:02:28.913354474Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 592.693167ms" Nov 5 15:02:28.913511 containerd[1976]: time="2025-11-05T15:02:28.913411354Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 5 15:02:28.914937 containerd[1976]: time="2025-11-05T15:02:28.914839882Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 5 15:02:29.604520 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1612640073.mount: Deactivated successfully. Nov 5 15:02:31.883853 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 5 15:02:31.891872 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:02:32.401495 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:02:32.415282 (kubelet)[2786]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:02:32.542873 kubelet[2786]: E1105 15:02:32.542764 2786 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:02:32.548223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:02:32.548616 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:02:32.550214 systemd[1]: kubelet.service: Consumed 379ms CPU time, 107.3M memory peak. 
Nov 5 15:02:32.739750 containerd[1976]: time="2025-11-05T15:02:32.739505281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:02:32.741281 containerd[1976]: time="2025-11-05T15:02:32.741203437Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Nov 5 15:02:32.743644 containerd[1976]: time="2025-11-05T15:02:32.743507029Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:02:32.751290 containerd[1976]: time="2025-11-05T15:02:32.751123321Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:02:32.754024 containerd[1976]: time="2025-11-05T15:02:32.753811261Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.838816483s" Nov 5 15:02:32.754024 containerd[1976]: time="2025-11-05T15:02:32.753875653Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Nov 5 15:02:36.310726 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 5 15:02:40.948699 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:02:40.949575 systemd[1]: kubelet.service: Consumed 379ms CPU time, 107.3M memory peak. Nov 5 15:02:40.953514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:02:41.006967 systemd[1]: Reload requested from client PID 2825 ('systemctl') (unit session-7.scope)... Nov 5 15:02:41.007207 systemd[1]: Reloading... Nov 5 15:02:41.257198 zram_generator::config[2873]: No configuration found. Nov 5 15:02:41.710128 systemd[1]: Reloading finished in 702 ms. Nov 5 15:02:41.807665 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 5 15:02:41.807831 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 5 15:02:41.808702 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:02:41.808775 systemd[1]: kubelet.service: Consumed 226ms CPU time, 95.1M memory peak. Nov 5 15:02:41.813598 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:02:42.142676 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:02:42.155647 (kubelet)[2934]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:02:42.228544 kubelet[2934]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:02:42.230177 kubelet[2934]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Nov 5 15:02:42.230177 kubelet[2934]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:02:42.230177 kubelet[2934]: I1105 15:02:42.229144 2934 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:02:43.743802 kubelet[2934]: I1105 15:02:43.743743 2934 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 5 15:02:43.743802 kubelet[2934]: I1105 15:02:43.743794 2934 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:02:43.744474 kubelet[2934]: I1105 15:02:43.744308 2934 server.go:954] "Client rotation is on, will bootstrap in background" Nov 5 15:02:43.801073 kubelet[2934]: E1105 15:02:43.801023 2934 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.21.83:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.21.83:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:02:43.804384 kubelet[2934]: I1105 15:02:43.804320 2934 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:02:43.817932 kubelet[2934]: I1105 15:02:43.817883 2934 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 15:02:43.825213 kubelet[2934]: I1105 15:02:43.824359 2934 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 5 15:02:43.825213 kubelet[2934]: I1105 15:02:43.824825 2934 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 15:02:43.825213 kubelet[2934]: I1105 15:02:43.824868 2934 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-83","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 15:02:43.825567 kubelet[2934]: I1105 15:02:43.825316 2934 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 15:02:43.825567 kubelet[2934]: I1105 15:02:43.825339 2934 container_manager_linux.go:304] "Creating device plugin manager" Nov 5 15:02:43.825705 kubelet[2934]: I1105 15:02:43.825672 2934 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:02:43.831846 kubelet[2934]: I1105 15:02:43.831653 2934 kubelet.go:446] "Attempting to sync node with API server" Nov 5 15:02:43.831846 kubelet[2934]: I1105 15:02:43.831701 2934 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 15:02:43.831846 kubelet[2934]: I1105 15:02:43.831744 2934 kubelet.go:352] "Adding apiserver pod source" Nov 5 15:02:43.831846 kubelet[2934]: I1105 15:02:43.831764 2934 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 15:02:43.840900 kubelet[2934]: W1105 15:02:43.840814 2934 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-83&limit=500&resourceVersion=0": dial tcp 172.31.21.83:6443: connect: connection refused Nov 5 15:02:43.842196 kubelet[2934]: E1105 15:02:43.841095 2934 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.21.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-83&limit=500&resourceVersion=0\": dial tcp 172.31.21.83:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:02:43.842196 kubelet[2934]: I1105 15:02:43.841252 2934 
kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 15:02:43.842538 kubelet[2934]: I1105 15:02:43.842510 2934 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 5 15:02:43.842835 kubelet[2934]: W1105 15:02:43.842816 2934 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 5 15:02:43.846797 kubelet[2934]: I1105 15:02:43.846754 2934 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 15:02:43.847019 kubelet[2934]: I1105 15:02:43.847001 2934 server.go:1287] "Started kubelet" Nov 5 15:02:43.859652 kubelet[2934]: I1105 15:02:43.859616 2934 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 15:02:43.866519 kubelet[2934]: W1105 15:02:43.866426 2934 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.21.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.21.83:6443: connect: connection refused Nov 5 15:02:43.866676 kubelet[2934]: E1105 15:02:43.866558 2934 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.21.83:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.21.83:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:02:43.867683 kubelet[2934]: I1105 15:02:43.867644 2934 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 15:02:43.872641 kubelet[2934]: I1105 15:02:43.872603 2934 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 15:02:43.873396 kubelet[2934]: E1105 15:02:43.873360 2934 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-83\" not found" Nov 5 15:02:43.874056 kubelet[2934]: E1105 15:02:43.867803 2934 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.21.83:6443/api/v1/namespaces/default/events\": dial tcp 172.31.21.83:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-21-83.18752480943e69ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-83,UID:ip-172-31-21-83,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-83,},FirstTimestamp:2025-11-05 15:02:43.846949356 +0000 UTC m=+1.683427318,LastTimestamp:2025-11-05 15:02:43.846949356 +0000 UTC m=+1.683427318,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-83,}" Nov 5 15:02:43.874807 kubelet[2934]: I1105 15:02:43.874778 2934 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 15:02:43.875039 kubelet[2934]: I1105 15:02:43.875018 2934 reconciler.go:26] "Reconciler: start to sync state" Nov 5 15:02:43.875525 kubelet[2934]: I1105 15:02:43.875455 2934 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 15:02:43.876000 kubelet[2934]: E1105 15:02:43.875958 2934 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://172.31.21.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-83?timeout=10s\": dial tcp 172.31.21.83:6443: connect: connection refused" interval="200ms" Nov 5 15:02:43.879209 kubelet[2934]: W1105 15:02:43.877509 2934 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.21.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.21.83:6443: connect: connection refused Nov 5 15:02:43.879209 kubelet[2934]: E1105 15:02:43.877603 2934 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.21.83:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.21.83:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:02:43.879209 kubelet[2934]: I1105 15:02:43.877846 2934 factory.go:221] Registration of the systemd container factory successfully Nov 5 15:02:43.879209 kubelet[2934]: I1105 15:02:43.878004 2934 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 15:02:43.879209 kubelet[2934]: I1105 15:02:43.878810 2934 server.go:479] "Adding debug handlers to kubelet server" Nov 5 15:02:43.879619 kubelet[2934]: I1105 15:02:43.879135 2934 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 15:02:43.880047 kubelet[2934]: I1105 15:02:43.880021 2934 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 15:02:43.880649 kubelet[2934]: E1105 15:02:43.880616 2934 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 15:02:43.882767 kubelet[2934]: I1105 15:02:43.882712 2934 factory.go:221] Registration of the containerd container factory successfully Nov 5 15:02:43.901962 kubelet[2934]: I1105 15:02:43.901907 2934 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 5 15:02:43.904600 kubelet[2934]: I1105 15:02:43.904557 2934 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 5 15:02:43.904783 kubelet[2934]: I1105 15:02:43.904762 2934 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 5 15:02:43.904919 kubelet[2934]: I1105 15:02:43.904900 2934 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 5 15:02:43.905176 kubelet[2934]: I1105 15:02:43.905139 2934 kubelet.go:2382] "Starting kubelet main sync loop" Nov 5 15:02:43.905367 kubelet[2934]: E1105 15:02:43.905330 2934 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 15:02:43.916940 kubelet[2934]: W1105 15:02:43.916847 2934 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.21.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.21.83:6443: connect: connection refused Nov 5 15:02:43.917082 kubelet[2934]: E1105 15:02:43.916953 2934 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.21.83:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.21.83:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:02:43.924495 kubelet[2934]: I1105 15:02:43.924429 2934 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 15:02:43.924495 kubelet[2934]: I1105 15:02:43.924489 2934 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 15:02:43.924706 kubelet[2934]: I1105 15:02:43.924525 2934 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:02:43.926547 kubelet[2934]: I1105 15:02:43.926504 2934 policy_none.go:49] "None policy: Start" Nov 5 15:02:43.926547 kubelet[2934]: I1105 15:02:43.926546 2934 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 15:02:43.926707 kubelet[2934]: I1105 15:02:43.926570 2934 state_mem.go:35] "Initializing new in-memory state store" Nov 5 15:02:43.937058 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 5 15:02:43.954707 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 5 15:02:43.963744 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 5 15:02:43.974046 kubelet[2934]: E1105 15:02:43.973972 2934 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-83\" not found" Nov 5 15:02:43.976938 kubelet[2934]: I1105 15:02:43.976905 2934 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 5 15:02:43.978260 kubelet[2934]: I1105 15:02:43.978085 2934 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 15:02:43.978260 kubelet[2934]: I1105 15:02:43.978113 2934 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 15:02:43.979306 kubelet[2934]: I1105 15:02:43.979196 2934 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 15:02:43.980738 kubelet[2934]: E1105 15:02:43.980423 2934 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 15:02:43.980738 kubelet[2934]: E1105 15:02:43.980489 2934 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-21-83\" not found" Nov 5 15:02:44.024246 systemd[1]: Created slice kubepods-burstable-pod0fba8bfe35679452e30d24414ec7e796.slice - libcontainer container kubepods-burstable-pod0fba8bfe35679452e30d24414ec7e796.slice. 
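
The kubepods-*.slice units created just above follow the systemd cgroup driver's naming scheme (the kubelet reported cgroupDriver="systemd" earlier): kubepods-<qos>-pod<uid>.slice, with dashes in the pod UID replaced by underscores and guaranteed-QoS pods placed directly under kubepods.slice. A small sketch of that mapping, inferred from the slice names logged here rather than quoted from kubelet source:

def pod_slice_name(pod_uid: str, qos_class: str) -> str:
    # qos_class is one of "guaranteed", "burstable", "besteffort".
    uid = pod_uid.replace("-", "_")
    if qos_class == "guaranteed":
        return f"kubepods-pod{uid}.slice"
    return f"kubepods-{qos_class}-pod{uid}.slice"

# UID of the kube-apiserver static pod from the log above.
print(pod_slice_name("0fba8bfe35679452e30d24414ec7e796", "burstable"))
# kubepods-burstable-pod0fba8bfe35679452e30d24414ec7e796.slice
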
Nov 5 15:02:44.050179 kubelet[2934]: E1105 15:02:44.049860 2934 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-83\" not found" node="ip-172-31-21-83" Nov 5 15:02:44.056001 systemd[1]: Created slice kubepods-burstable-pod6068aefbd0f25a0ea3a310a3bde323c0.slice - libcontainer container kubepods-burstable-pod6068aefbd0f25a0ea3a310a3bde323c0.slice. Nov 5 15:02:44.071204 kubelet[2934]: E1105 15:02:44.070567 2934 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-83\" not found" node="ip-172-31-21-83" Nov 5 15:02:44.076493 systemd[1]: Created slice kubepods-burstable-pod7c5eb875aa94dd98920a7fd561b683fb.slice - libcontainer container kubepods-burstable-pod7c5eb875aa94dd98920a7fd561b683fb.slice. Nov 5 15:02:44.077270 kubelet[2934]: I1105 15:02:44.077235 2934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0fba8bfe35679452e30d24414ec7e796-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-83\" (UID: \"0fba8bfe35679452e30d24414ec7e796\") " pod="kube-system/kube-apiserver-ip-172-31-21-83" Nov 5 15:02:44.077451 kubelet[2934]: I1105 15:02:44.077421 2934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6068aefbd0f25a0ea3a310a3bde323c0-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-83\" (UID: \"6068aefbd0f25a0ea3a310a3bde323c0\") " pod="kube-system/kube-controller-manager-ip-172-31-21-83" Nov 5 15:02:44.077582 kubelet[2934]: I1105 15:02:44.077558 2934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6068aefbd0f25a0ea3a310a3bde323c0-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-83\" (UID: \"6068aefbd0f25a0ea3a310a3bde323c0\") " pod="kube-system/kube-controller-manager-ip-172-31-21-83" Nov 5 15:02:44.077707 kubelet[2934]: I1105 15:02:44.077684 2934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6068aefbd0f25a0ea3a310a3bde323c0-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-83\" (UID: \"6068aefbd0f25a0ea3a310a3bde323c0\") " pod="kube-system/kube-controller-manager-ip-172-31-21-83" Nov 5 15:02:44.077833 kubelet[2934]: I1105 15:02:44.077809 2934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c5eb875aa94dd98920a7fd561b683fb-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-83\" (UID: \"7c5eb875aa94dd98920a7fd561b683fb\") " pod="kube-system/kube-scheduler-ip-172-31-21-83" Nov 5 15:02:44.077983 kubelet[2934]: I1105 15:02:44.077946 2934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0fba8bfe35679452e30d24414ec7e796-ca-certs\") pod \"kube-apiserver-ip-172-31-21-83\" (UID: \"0fba8bfe35679452e30d24414ec7e796\") " pod="kube-system/kube-apiserver-ip-172-31-21-83" Nov 5 15:02:44.078109 kubelet[2934]: I1105 15:02:44.078083 2934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6068aefbd0f25a0ea3a310a3bde323c0-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ip-172-31-21-83\" (UID: \"6068aefbd0f25a0ea3a310a3bde323c0\") " pod="kube-system/kube-controller-manager-ip-172-31-21-83" Nov 5 15:02:44.078615 kubelet[2934]: I1105 15:02:44.078230 2934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0fba8bfe35679452e30d24414ec7e796-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-83\" (UID: \"0fba8bfe35679452e30d24414ec7e796\") " pod="kube-system/kube-apiserver-ip-172-31-21-83" Nov 5 15:02:44.078615 kubelet[2934]: I1105 15:02:44.078269 2934 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6068aefbd0f25a0ea3a310a3bde323c0-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-83\" (UID: \"6068aefbd0f25a0ea3a310a3bde323c0\") " pod="kube-system/kube-controller-manager-ip-172-31-21-83" Nov 5 15:02:44.079340 kubelet[2934]: E1105 15:02:44.079279 2934 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-83?timeout=10s\": dial tcp 172.31.21.83:6443: connect: connection refused" interval="400ms" Nov 5 15:02:44.081144 kubelet[2934]: I1105 15:02:44.081101 2934 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-83" Nov 5 15:02:44.082415 kubelet[2934]: E1105 15:02:44.082332 2934 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.83:6443/api/v1/nodes\": dial tcp 172.31.21.83:6443: connect: connection refused" node="ip-172-31-21-83" Nov 5 15:02:44.083755 kubelet[2934]: E1105 15:02:44.083700 2934 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-83\" not found" node="ip-172-31-21-83" Nov 5 15:02:44.285359 kubelet[2934]: I1105 15:02:44.285145 2934 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-83" Nov 5 15:02:44.286572 kubelet[2934]: E1105 15:02:44.286518 2934 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.83:6443/api/v1/nodes\": dial tcp 172.31.21.83:6443: connect: connection refused" node="ip-172-31-21-83" Nov 5 15:02:44.353640 containerd[1976]: time="2025-11-05T15:02:44.353563978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-83,Uid:0fba8bfe35679452e30d24414ec7e796,Namespace:kube-system,Attempt:0,}" Nov 5 15:02:44.374364 containerd[1976]: time="2025-11-05T15:02:44.373486246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-83,Uid:6068aefbd0f25a0ea3a310a3bde323c0,Namespace:kube-system,Attempt:0,}" Nov 5 15:02:44.386269 containerd[1976]: time="2025-11-05T15:02:44.386209738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-83,Uid:7c5eb875aa94dd98920a7fd561b683fb,Namespace:kube-system,Attempt:0,}" Nov 5 15:02:44.396718 containerd[1976]: time="2025-11-05T15:02:44.396557914Z" level=info msg="connecting to shim ee86ae89ae52887b6b669e55dbcc063ad8bbd089afa0c795172b1ac0fc79abfd" address="unix:///run/containerd/s/bff8966fdeb390b82484cdcb0b6c7f3da9ae138acb4fcd3f756b022e7730eea0" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:02:44.455440 containerd[1976]: time="2025-11-05T15:02:44.455371163Z" level=info msg="connecting to shim 
a35595f4aa5ac34889f309f805418ee80cddbb56147ef0350561d02ac577f151" address="unix:///run/containerd/s/ecb993a7399f33ea24dc8437ce4633372c5f2df1aa4f13d594b1f90f0267fb28" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:02:44.460452 containerd[1976]: time="2025-11-05T15:02:44.460375619Z" level=info msg="connecting to shim 113422c324ae179c8f61c0e4e2f9d96f210745edfb03c2b3a113e43ac0de628d" address="unix:///run/containerd/s/3d91da675184255c27abeaa8fae8238b17839cf8fcd3249aedfb39cdf4aa5254" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:02:44.478690 systemd[1]: Started cri-containerd-ee86ae89ae52887b6b669e55dbcc063ad8bbd089afa0c795172b1ac0fc79abfd.scope - libcontainer container ee86ae89ae52887b6b669e55dbcc063ad8bbd089afa0c795172b1ac0fc79abfd. Nov 5 15:02:44.484031 kubelet[2934]: E1105 15:02:44.483529 2934 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.21.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-83?timeout=10s\": dial tcp 172.31.21.83:6443: connect: connection refused" interval="800ms" Nov 5 15:02:44.552661 systemd[1]: Started cri-containerd-a35595f4aa5ac34889f309f805418ee80cddbb56147ef0350561d02ac577f151.scope - libcontainer container a35595f4aa5ac34889f309f805418ee80cddbb56147ef0350561d02ac577f151. Nov 5 15:02:44.565714 systemd[1]: Started cri-containerd-113422c324ae179c8f61c0e4e2f9d96f210745edfb03c2b3a113e43ac0de628d.scope - libcontainer container 113422c324ae179c8f61c0e4e2f9d96f210745edfb03c2b3a113e43ac0de628d. Nov 5 15:02:44.633497 containerd[1976]: time="2025-11-05T15:02:44.633445980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-21-83,Uid:0fba8bfe35679452e30d24414ec7e796,Namespace:kube-system,Attempt:0,} returns sandbox id \"ee86ae89ae52887b6b669e55dbcc063ad8bbd089afa0c795172b1ac0fc79abfd\"" Nov 5 15:02:44.644233 containerd[1976]: time="2025-11-05T15:02:44.643621632Z" level=info msg="CreateContainer within sandbox \"ee86ae89ae52887b6b669e55dbcc063ad8bbd089afa0c795172b1ac0fc79abfd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 15:02:44.665982 containerd[1976]: time="2025-11-05T15:02:44.665862732Z" level=info msg="Container 834cc9b280bca35d0f73c0d1f14fcbcae0eeb308078941d83dd5b03c3f5e2f04: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:02:44.690053 containerd[1976]: time="2025-11-05T15:02:44.689986896Z" level=info msg="CreateContainer within sandbox \"ee86ae89ae52887b6b669e55dbcc063ad8bbd089afa0c795172b1ac0fc79abfd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"834cc9b280bca35d0f73c0d1f14fcbcae0eeb308078941d83dd5b03c3f5e2f04\"" Nov 5 15:02:44.690930 kubelet[2934]: I1105 15:02:44.690879 2934 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-83" Nov 5 15:02:44.692764 containerd[1976]: time="2025-11-05T15:02:44.692659824Z" level=info msg="StartContainer for \"834cc9b280bca35d0f73c0d1f14fcbcae0eeb308078941d83dd5b03c3f5e2f04\"" Nov 5 15:02:44.693553 kubelet[2934]: E1105 15:02:44.693377 2934 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.21.83:6443/api/v1/nodes\": dial tcp 172.31.21.83:6443: connect: connection refused" node="ip-172-31-21-83" Nov 5 15:02:44.702877 containerd[1976]: time="2025-11-05T15:02:44.702807132Z" level=info msg="connecting to shim 834cc9b280bca35d0f73c0d1f14fcbcae0eeb308078941d83dd5b03c3f5e2f04" address="unix:///run/containerd/s/bff8966fdeb390b82484cdcb0b6c7f3da9ae138acb4fcd3f756b022e7730eea0" protocol=ttrpc 
version=3 Nov 5 15:02:44.706433 containerd[1976]: time="2025-11-05T15:02:44.706362984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-21-83,Uid:6068aefbd0f25a0ea3a310a3bde323c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"a35595f4aa5ac34889f309f805418ee80cddbb56147ef0350561d02ac577f151\"" Nov 5 15:02:44.712852 containerd[1976]: time="2025-11-05T15:02:44.712801980Z" level=info msg="CreateContainer within sandbox \"a35595f4aa5ac34889f309f805418ee80cddbb56147ef0350561d02ac577f151\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 15:02:44.724839 containerd[1976]: time="2025-11-05T15:02:44.724777560Z" level=info msg="Container 203a65cae9fa3dfbeb933fc9697226f101f24890861490f6bf77d9862c2f58a1: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:02:44.746208 containerd[1976]: time="2025-11-05T15:02:44.745621272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-21-83,Uid:7c5eb875aa94dd98920a7fd561b683fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"113422c324ae179c8f61c0e4e2f9d96f210745edfb03c2b3a113e43ac0de628d\"" Nov 5 15:02:44.752206 containerd[1976]: time="2025-11-05T15:02:44.751995252Z" level=info msg="CreateContainer within sandbox \"a35595f4aa5ac34889f309f805418ee80cddbb56147ef0350561d02ac577f151\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"203a65cae9fa3dfbeb933fc9697226f101f24890861490f6bf77d9862c2f58a1\"" Nov 5 15:02:44.756977 containerd[1976]: time="2025-11-05T15:02:44.756893964Z" level=info msg="CreateContainer within sandbox \"113422c324ae179c8f61c0e4e2f9d96f210745edfb03c2b3a113e43ac0de628d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 15:02:44.757884 containerd[1976]: time="2025-11-05T15:02:44.757781856Z" level=info msg="StartContainer for \"203a65cae9fa3dfbeb933fc9697226f101f24890861490f6bf77d9862c2f58a1\"" Nov 5 15:02:44.759498 systemd[1]: Started cri-containerd-834cc9b280bca35d0f73c0d1f14fcbcae0eeb308078941d83dd5b03c3f5e2f04.scope - libcontainer container 834cc9b280bca35d0f73c0d1f14fcbcae0eeb308078941d83dd5b03c3f5e2f04. 
Nov 5 15:02:44.766558 containerd[1976]: time="2025-11-05T15:02:44.766123224Z" level=info msg="connecting to shim 203a65cae9fa3dfbeb933fc9697226f101f24890861490f6bf77d9862c2f58a1" address="unix:///run/containerd/s/ecb993a7399f33ea24dc8437ce4633372c5f2df1aa4f13d594b1f90f0267fb28" protocol=ttrpc version=3 Nov 5 15:02:44.782641 containerd[1976]: time="2025-11-05T15:02:44.782571444Z" level=info msg="Container a012f939d59cc3226a01dd51a8df61cd578656e3e9c267fa74395f92c6899d45: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:02:44.806546 containerd[1976]: time="2025-11-05T15:02:44.805652784Z" level=info msg="CreateContainer within sandbox \"113422c324ae179c8f61c0e4e2f9d96f210745edfb03c2b3a113e43ac0de628d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a012f939d59cc3226a01dd51a8df61cd578656e3e9c267fa74395f92c6899d45\"" Nov 5 15:02:44.809877 containerd[1976]: time="2025-11-05T15:02:44.809808841Z" level=info msg="StartContainer for \"a012f939d59cc3226a01dd51a8df61cd578656e3e9c267fa74395f92c6899d45\"" Nov 5 15:02:44.817303 containerd[1976]: time="2025-11-05T15:02:44.817072153Z" level=info msg="connecting to shim a012f939d59cc3226a01dd51a8df61cd578656e3e9c267fa74395f92c6899d45" address="unix:///run/containerd/s/3d91da675184255c27abeaa8fae8238b17839cf8fcd3249aedfb39cdf4aa5254" protocol=ttrpc version=3 Nov 5 15:02:44.838548 systemd[1]: Started cri-containerd-203a65cae9fa3dfbeb933fc9697226f101f24890861490f6bf77d9862c2f58a1.scope - libcontainer container 203a65cae9fa3dfbeb933fc9697226f101f24890861490f6bf77d9862c2f58a1. Nov 5 15:02:44.881391 systemd[1]: Started cri-containerd-a012f939d59cc3226a01dd51a8df61cd578656e3e9c267fa74395f92c6899d45.scope - libcontainer container a012f939d59cc3226a01dd51a8df61cd578656e3e9c267fa74395f92c6899d45. 
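
The three sandboxes being started here come from the static pod path /etc/kubernetes/manifests registered earlier; each file there is an ordinary Pod manifest that the kubelet runs without involving the API server. A minimal sketch for listing them on the node (assumes the default kubeadm layout and that PyYAML is available; both are assumptions, not shown in the log):

import os
import yaml  # PyYAML

MANIFEST_DIR = "/etc/kubernetes/manifests"  # path from "Adding static pod path" above

for name in sorted(os.listdir(MANIFEST_DIR)):
    if not name.endswith((".yaml", ".yml")):
        continue
    with open(os.path.join(MANIFEST_DIR, name)) as fh:
        pod = yaml.safe_load(fh)
    meta = pod.get("metadata", {})
    print(f"{name}: {meta.get('namespace')}/{meta.get('name')}")
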
Nov 5 15:02:44.984466 kubelet[2934]: W1105 15:02:44.984363 2934 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.21.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-83&limit=500&resourceVersion=0": dial tcp 172.31.21.83:6443: connect: connection refused Nov 5 15:02:44.986001 kubelet[2934]: E1105 15:02:44.984485 2934 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.21.83:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-21-83&limit=500&resourceVersion=0\": dial tcp 172.31.21.83:6443: connect: connection refused" logger="UnhandledError" Nov 5 15:02:44.986765 containerd[1976]: time="2025-11-05T15:02:44.986231041Z" level=info msg="StartContainer for \"834cc9b280bca35d0f73c0d1f14fcbcae0eeb308078941d83dd5b03c3f5e2f04\" returns successfully" Nov 5 15:02:45.085384 containerd[1976]: time="2025-11-05T15:02:45.083612386Z" level=info msg="StartContainer for \"203a65cae9fa3dfbeb933fc9697226f101f24890861490f6bf77d9862c2f58a1\" returns successfully" Nov 5 15:02:45.101092 containerd[1976]: time="2025-11-05T15:02:45.101018014Z" level=info msg="StartContainer for \"a012f939d59cc3226a01dd51a8df61cd578656e3e9c267fa74395f92c6899d45\" returns successfully" Nov 5 15:02:45.500437 kubelet[2934]: I1105 15:02:45.500394 2934 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-83" Nov 5 15:02:45.994196 kubelet[2934]: E1105 15:02:45.993643 2934 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-83\" not found" node="ip-172-31-21-83" Nov 5 15:02:46.005267 kubelet[2934]: E1105 15:02:46.005223 2934 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-83\" not found" node="ip-172-31-21-83" Nov 5 15:02:46.015561 kubelet[2934]: E1105 15:02:46.015488 2934 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-83\" not found" node="ip-172-31-21-83" Nov 5 15:02:47.017672 kubelet[2934]: E1105 15:02:47.017406 2934 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-83\" not found" node="ip-172-31-21-83" Nov 5 15:02:47.020442 kubelet[2934]: E1105 15:02:47.020395 2934 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-83\" not found" node="ip-172-31-21-83" Nov 5 15:02:47.021248 kubelet[2934]: E1105 15:02:47.021204 2934 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-83\" not found" node="ip-172-31-21-83" Nov 5 15:02:48.019986 kubelet[2934]: E1105 15:02:48.019719 2934 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-83\" not found" node="ip-172-31-21-83" Nov 5 15:02:48.022811 kubelet[2934]: E1105 15:02:48.020435 2934 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-83\" not found" node="ip-172-31-21-83" Nov 5 15:02:48.023337 kubelet[2934]: E1105 15:02:48.022867 2934 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-83\" not found" node="ip-172-31-21-83" Nov 5 15:02:49.089240 kubelet[2934]: 
E1105 15:02:49.089186 2934 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-21-83\" not found" node="ip-172-31-21-83" Nov 5 15:02:49.483831 kubelet[2934]: I1105 15:02:49.483470 2934 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-83" Nov 5 15:02:49.550819 kubelet[2934]: E1105 15:02:49.550639 2934 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-21-83.18752480943e69ec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-21-83,UID:ip-172-31-21-83,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-21-83,},FirstTimestamp:2025-11-05 15:02:43.846949356 +0000 UTC m=+1.683427318,LastTimestamp:2025-11-05 15:02:43.846949356 +0000 UTC m=+1.683427318,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-21-83,}" Nov 5 15:02:49.574721 kubelet[2934]: I1105 15:02:49.574297 2934 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-83" Nov 5 15:02:49.594845 kubelet[2934]: E1105 15:02:49.594794 2934 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="1.6s" Nov 5 15:02:49.616081 kubelet[2934]: E1105 15:02:49.616038 2934 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-21-83\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-21-83" Nov 5 15:02:49.616641 kubelet[2934]: I1105 15:02:49.616303 2934 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-83" Nov 5 15:02:49.623662 kubelet[2934]: E1105 15:02:49.623618 2934 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-21-83\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-21-83" Nov 5 15:02:49.624179 kubelet[2934]: I1105 15:02:49.623859 2934 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-83" Nov 5 15:02:49.630620 kubelet[2934]: E1105 15:02:49.630577 2934 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-21-83\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-21-83" Nov 5 15:02:49.860038 kubelet[2934]: I1105 15:02:49.857966 2934 apiserver.go:52] "Watching apiserver" Nov 5 15:02:49.875516 kubelet[2934]: I1105 15:02:49.875453 2934 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 15:02:49.875926 update_engine[1960]: I20251105 15:02:49.875837 1960 update_attempter.cc:509] Updating boot flags... Nov 5 15:02:52.154746 systemd[1]: Reload requested from client PID 3300 ('systemctl') (unit session-7.scope)... Nov 5 15:02:52.154775 systemd[1]: Reloading... Nov 5 15:02:52.396219 zram_generator::config[3348]: No configuration found. Nov 5 15:02:52.966474 systemd[1]: Reloading finished in 810 ms. Nov 5 15:02:53.011647 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
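
Once the API server is reachable and the kube-node-lease namespace exists, the retries above stop: node registration succeeds and the kubelet heartbeats through a Lease object named after the node. A hedged sketch for inspecting both with the official Python client (node and namespace names are taken from the log; the script itself is illustrative):

from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when run inside the cluster

node_name = "ip-172-31-21-83"
lease = client.CoordinationV1Api().read_namespaced_lease(node_name, "kube-node-lease")
print("lease holder:", lease.spec.holder_identity, "renewed:", lease.spec.renew_time)

node = client.CoreV1Api().read_node(node_name)
print("node registered, UID:", node.metadata.uid)
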
Nov 5 15:02:53.025891 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 15:02:53.026539 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:02:53.026651 systemd[1]: kubelet.service: Consumed 2.527s CPU time, 127.3M memory peak. Nov 5 15:02:53.032653 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:02:53.420797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:02:53.442783 (kubelet)[3405]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:02:53.560930 kubelet[3405]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:02:53.560930 kubelet[3405]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 15:02:53.560930 kubelet[3405]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:02:53.561502 kubelet[3405]: I1105 15:02:53.561249 3405 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:02:53.584198 kubelet[3405]: I1105 15:02:53.583844 3405 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 5 15:02:53.584198 kubelet[3405]: I1105 15:02:53.583905 3405 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:02:53.584762 kubelet[3405]: I1105 15:02:53.584708 3405 server.go:954] "Client rotation is on, will bootstrap in background" Nov 5 15:02:53.597297 kubelet[3405]: I1105 15:02:53.597224 3405 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 5 15:02:53.604610 kubelet[3405]: I1105 15:02:53.604551 3405 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:02:53.619266 kubelet[3405]: I1105 15:02:53.618774 3405 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 15:02:53.636552 kubelet[3405]: I1105 15:02:53.636494 3405 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 5 15:02:53.637361 kubelet[3405]: I1105 15:02:53.637268 3405 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 15:02:53.637741 kubelet[3405]: I1105 15:02:53.637364 3405 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-21-83","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 15:02:53.637930 kubelet[3405]: I1105 15:02:53.637764 3405 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 15:02:53.637930 kubelet[3405]: I1105 15:02:53.637788 3405 container_manager_linux.go:304] "Creating device plugin manager" Nov 5 15:02:53.637930 kubelet[3405]: I1105 15:02:53.637875 3405 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:02:53.639523 kubelet[3405]: I1105 15:02:53.638279 3405 kubelet.go:446] "Attempting to sync node with API server" Nov 5 15:02:53.639523 kubelet[3405]: I1105 15:02:53.638329 3405 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 15:02:53.639523 kubelet[3405]: I1105 15:02:53.638376 3405 kubelet.go:352] "Adding apiserver pod source" Nov 5 15:02:53.639523 kubelet[3405]: I1105 15:02:53.638398 3405 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 15:02:53.642177 kubelet[3405]: I1105 15:02:53.642015 3405 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 15:02:53.644777 kubelet[3405]: I1105 15:02:53.644060 3405 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 5 15:02:53.646876 kubelet[3405]: I1105 15:02:53.646788 3405 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 15:02:53.647264 kubelet[3405]: I1105 15:02:53.647117 3405 server.go:1287] "Started kubelet" Nov 5 15:02:53.655508 kubelet[3405]: I1105 15:02:53.655462 3405 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 15:02:53.667450 kubelet[3405]: I1105 15:02:53.667025 3405 server.go:169] "Starting to listen" 
address="0.0.0.0" port=10250 Nov 5 15:02:53.675992 kubelet[3405]: I1105 15:02:53.674398 3405 server.go:479] "Adding debug handlers to kubelet server" Nov 5 15:02:53.684572 kubelet[3405]: I1105 15:02:53.684467 3405 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 15:02:53.685198 kubelet[3405]: I1105 15:02:53.685107 3405 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 15:02:53.686783 kubelet[3405]: E1105 15:02:53.685737 3405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-83\" not found" Nov 5 15:02:53.688037 kubelet[3405]: I1105 15:02:53.687977 3405 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 15:02:53.703378 kubelet[3405]: I1105 15:02:53.703328 3405 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 15:02:53.708458 kubelet[3405]: I1105 15:02:53.708412 3405 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 15:02:53.710703 kubelet[3405]: I1105 15:02:53.708914 3405 reconciler.go:26] "Reconciler: start to sync state" Nov 5 15:02:53.710703 kubelet[3405]: I1105 15:02:53.710503 3405 factory.go:221] Registration of the systemd container factory successfully Nov 5 15:02:53.714225 kubelet[3405]: E1105 15:02:53.712698 3405 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 15:02:53.714655 kubelet[3405]: I1105 15:02:53.713654 3405 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 15:02:53.724481 kubelet[3405]: I1105 15:02:53.724436 3405 factory.go:221] Registration of the containerd container factory successfully Nov 5 15:02:53.788187 kubelet[3405]: E1105 15:02:53.787301 3405 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-21-83\" not found" Nov 5 15:02:53.792320 kubelet[3405]: I1105 15:02:53.792253 3405 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 5 15:02:53.796019 kubelet[3405]: I1105 15:02:53.795934 3405 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 5 15:02:53.796019 kubelet[3405]: I1105 15:02:53.795997 3405 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 5 15:02:53.796321 kubelet[3405]: I1105 15:02:53.796042 3405 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 5 15:02:53.796321 kubelet[3405]: I1105 15:02:53.796059 3405 kubelet.go:2382] "Starting kubelet main sync loop" Nov 5 15:02:53.796321 kubelet[3405]: E1105 15:02:53.796140 3405 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 15:02:53.896473 kubelet[3405]: E1105 15:02:53.896433 3405 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 5 15:02:53.937686 kubelet[3405]: I1105 15:02:53.936783 3405 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 15:02:53.937978 kubelet[3405]: I1105 15:02:53.937947 3405 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 15:02:53.938169 kubelet[3405]: I1105 15:02:53.938090 3405 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:02:53.938882 kubelet[3405]: I1105 15:02:53.938824 3405 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 15:02:53.939099 kubelet[3405]: I1105 15:02:53.939032 3405 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 15:02:53.939308 kubelet[3405]: I1105 15:02:53.939290 3405 policy_none.go:49] "None policy: Start" Nov 5 15:02:53.939419 kubelet[3405]: I1105 15:02:53.939402 3405 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 15:02:53.939894 kubelet[3405]: I1105 15:02:53.939504 3405 state_mem.go:35] "Initializing new in-memory state store" Nov 5 15:02:53.939894 kubelet[3405]: I1105 15:02:53.939773 3405 state_mem.go:75] "Updated machine memory state" Nov 5 15:02:53.958474 kubelet[3405]: I1105 15:02:53.955513 3405 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 5 15:02:53.960127 kubelet[3405]: I1105 15:02:53.959513 3405 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 15:02:53.961203 kubelet[3405]: I1105 15:02:53.960361 3405 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 15:02:53.967220 kubelet[3405]: I1105 15:02:53.966421 3405 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 15:02:53.970078 kubelet[3405]: E1105 15:02:53.969708 3405 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 5 15:02:54.088764 kubelet[3405]: I1105 15:02:54.088675 3405 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-21-83" Nov 5 15:02:54.099611 kubelet[3405]: I1105 15:02:54.098892 3405 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-21-83" Nov 5 15:02:54.100120 kubelet[3405]: I1105 15:02:54.100073 3405 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-21-83" Nov 5 15:02:54.103803 kubelet[3405]: I1105 15:02:54.103767 3405 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-21-83" Nov 5 15:02:54.116239 kubelet[3405]: I1105 15:02:54.115996 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c5eb875aa94dd98920a7fd561b683fb-kubeconfig\") pod \"kube-scheduler-ip-172-31-21-83\" (UID: \"7c5eb875aa94dd98920a7fd561b683fb\") " pod="kube-system/kube-scheduler-ip-172-31-21-83" Nov 5 15:02:54.117345 kubelet[3405]: I1105 15:02:54.117294 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0fba8bfe35679452e30d24414ec7e796-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-21-83\" (UID: \"0fba8bfe35679452e30d24414ec7e796\") " pod="kube-system/kube-apiserver-ip-172-31-21-83" Nov 5 15:02:54.117884 kubelet[3405]: I1105 15:02:54.117570 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0fba8bfe35679452e30d24414ec7e796-k8s-certs\") pod \"kube-apiserver-ip-172-31-21-83\" (UID: \"0fba8bfe35679452e30d24414ec7e796\") " pod="kube-system/kube-apiserver-ip-172-31-21-83" Nov 5 15:02:54.117884 kubelet[3405]: I1105 15:02:54.117622 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6068aefbd0f25a0ea3a310a3bde323c0-ca-certs\") pod \"kube-controller-manager-ip-172-31-21-83\" (UID: \"6068aefbd0f25a0ea3a310a3bde323c0\") " pod="kube-system/kube-controller-manager-ip-172-31-21-83" Nov 5 15:02:54.117884 kubelet[3405]: I1105 15:02:54.117663 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6068aefbd0f25a0ea3a310a3bde323c0-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-21-83\" (UID: \"6068aefbd0f25a0ea3a310a3bde323c0\") " pod="kube-system/kube-controller-manager-ip-172-31-21-83" Nov 5 15:02:54.117884 kubelet[3405]: I1105 15:02:54.117722 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6068aefbd0f25a0ea3a310a3bde323c0-k8s-certs\") pod \"kube-controller-manager-ip-172-31-21-83\" (UID: \"6068aefbd0f25a0ea3a310a3bde323c0\") " pod="kube-system/kube-controller-manager-ip-172-31-21-83" Nov 5 15:02:54.117884 kubelet[3405]: I1105 15:02:54.117761 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6068aefbd0f25a0ea3a310a3bde323c0-kubeconfig\") pod \"kube-controller-manager-ip-172-31-21-83\" (UID: \"6068aefbd0f25a0ea3a310a3bde323c0\") " 
pod="kube-system/kube-controller-manager-ip-172-31-21-83" Nov 5 15:02:54.118210 kubelet[3405]: I1105 15:02:54.117798 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6068aefbd0f25a0ea3a310a3bde323c0-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-21-83\" (UID: \"6068aefbd0f25a0ea3a310a3bde323c0\") " pod="kube-system/kube-controller-manager-ip-172-31-21-83" Nov 5 15:02:54.118210 kubelet[3405]: I1105 15:02:54.117837 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0fba8bfe35679452e30d24414ec7e796-ca-certs\") pod \"kube-apiserver-ip-172-31-21-83\" (UID: \"0fba8bfe35679452e30d24414ec7e796\") " pod="kube-system/kube-apiserver-ip-172-31-21-83" Nov 5 15:02:54.140507 kubelet[3405]: I1105 15:02:54.140224 3405 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-21-83" Nov 5 15:02:54.142785 kubelet[3405]: I1105 15:02:54.142360 3405 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-21-83" Nov 5 15:02:54.640839 kubelet[3405]: I1105 15:02:54.639831 3405 apiserver.go:52] "Watching apiserver" Nov 5 15:02:54.711690 kubelet[3405]: I1105 15:02:54.711217 3405 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 15:02:54.793495 kubelet[3405]: I1105 15:02:54.793368 3405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-21-83" podStartSLOduration=0.793344094 podStartE2EDuration="793.344094ms" podCreationTimestamp="2025-11-05 15:02:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:02:54.772595806 +0000 UTC m=+1.316537456" watchObservedRunningTime="2025-11-05 15:02:54.793344094 +0000 UTC m=+1.337285708" Nov 5 15:02:54.793715 kubelet[3405]: I1105 15:02:54.793643 3405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-21-83" podStartSLOduration=0.793550434 podStartE2EDuration="793.550434ms" podCreationTimestamp="2025-11-05 15:02:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:02:54.793343938 +0000 UTC m=+1.337285576" watchObservedRunningTime="2025-11-05 15:02:54.793550434 +0000 UTC m=+1.337492048" Nov 5 15:02:54.840602 kubelet[3405]: I1105 15:02:54.840489 3405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-21-83" podStartSLOduration=0.840464278 podStartE2EDuration="840.464278ms" podCreationTimestamp="2025-11-05 15:02:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:02:54.813757018 +0000 UTC m=+1.357698656" watchObservedRunningTime="2025-11-05 15:02:54.840464278 +0000 UTC m=+1.384405904" Nov 5 15:02:58.081185 kubelet[3405]: I1105 15:02:58.080906 3405 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 15:02:58.082535 containerd[1976]: time="2025-11-05T15:02:58.082471834Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 5 15:02:58.083446 kubelet[3405]: I1105 15:02:58.082920 3405 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 15:02:58.706115 systemd[1]: Created slice kubepods-besteffort-pod0ae852ee_6e24_4eec_aca4_242b19ce8c21.slice - libcontainer container kubepods-besteffort-pod0ae852ee_6e24_4eec_aca4_242b19ce8c21.slice. Nov 5 15:02:58.748581 kubelet[3405]: I1105 15:02:58.748517 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0ae852ee-6e24-4eec-aca4-242b19ce8c21-xtables-lock\") pod \"kube-proxy-7nvrm\" (UID: \"0ae852ee-6e24-4eec-aca4-242b19ce8c21\") " pod="kube-system/kube-proxy-7nvrm" Nov 5 15:02:58.748581 kubelet[3405]: I1105 15:02:58.748593 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp8m9\" (UniqueName: \"kubernetes.io/projected/0ae852ee-6e24-4eec-aca4-242b19ce8c21-kube-api-access-tp8m9\") pod \"kube-proxy-7nvrm\" (UID: \"0ae852ee-6e24-4eec-aca4-242b19ce8c21\") " pod="kube-system/kube-proxy-7nvrm" Nov 5 15:02:58.749121 kubelet[3405]: I1105 15:02:58.748697 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0ae852ee-6e24-4eec-aca4-242b19ce8c21-kube-proxy\") pod \"kube-proxy-7nvrm\" (UID: \"0ae852ee-6e24-4eec-aca4-242b19ce8c21\") " pod="kube-system/kube-proxy-7nvrm" Nov 5 15:02:58.749121 kubelet[3405]: I1105 15:02:58.748767 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0ae852ee-6e24-4eec-aca4-242b19ce8c21-lib-modules\") pod \"kube-proxy-7nvrm\" (UID: \"0ae852ee-6e24-4eec-aca4-242b19ce8c21\") " pod="kube-system/kube-proxy-7nvrm" Nov 5 15:02:59.028231 containerd[1976]: time="2025-11-05T15:02:59.027523115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7nvrm,Uid:0ae852ee-6e24-4eec-aca4-242b19ce8c21,Namespace:kube-system,Attempt:0,}" Nov 5 15:02:59.084521 containerd[1976]: time="2025-11-05T15:02:59.084433847Z" level=info msg="connecting to shim 78b4f5381e3d5a623faa6facebb931ded1db37c2535df5664889653a5c53e709" address="unix:///run/containerd/s/2f318f990fbef6a0194dbe99c47c206bc3e2eff8c857b92fce38d043064ce1be" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:02:59.163914 systemd[1]: Started cri-containerd-78b4f5381e3d5a623faa6facebb931ded1db37c2535df5664889653a5c53e709.scope - libcontainer container 78b4f5381e3d5a623faa6facebb931ded1db37c2535df5664889653a5c53e709. Nov 5 15:02:59.277444 systemd[1]: Created slice kubepods-besteffort-pod1ebaf9fe_8489_4f5e_acc3_e48ad95213f0.slice - libcontainer container kubepods-besteffort-pod1ebaf9fe_8489_4f5e_acc3_e48ad95213f0.slice. 
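
The kube-proxy pod prepared above mounts host paths for xtables and kernel modules plus its configuration from the kube-proxy ConfigMap in kube-system. Reading that rendered configuration back is a quick sanity check; a sketch only, and the config.conf key name is the usual kubeadm convention rather than something shown in this log:

from kubernetes import client, config

config.load_kube_config()
cm = client.CoreV1Api().read_namespaced_config_map("kube-proxy", "kube-system")
print(cm.data.get("config.conf", "")[:400])  # rendered KubeProxyConfiguration (kubeadm layout)
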
Nov 5 15:02:59.282237 containerd[1976]: time="2025-11-05T15:02:59.281717496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7nvrm,Uid:0ae852ee-6e24-4eec-aca4-242b19ce8c21,Namespace:kube-system,Attempt:0,} returns sandbox id \"78b4f5381e3d5a623faa6facebb931ded1db37c2535df5664889653a5c53e709\"" Nov 5 15:02:59.307295 containerd[1976]: time="2025-11-05T15:02:59.307101661Z" level=info msg="CreateContainer within sandbox \"78b4f5381e3d5a623faa6facebb931ded1db37c2535df5664889653a5c53e709\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 15:02:59.334707 containerd[1976]: time="2025-11-05T15:02:59.334645885Z" level=info msg="Container 1562ad714c103253652fe9000826e803b7e86c4e9cc140a70dbdcedd9458bd44: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:02:59.340925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2174001339.mount: Deactivated successfully. Nov 5 15:02:59.353755 kubelet[3405]: I1105 15:02:59.353673 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1ebaf9fe-8489-4f5e-acc3-e48ad95213f0-var-lib-calico\") pod \"tigera-operator-7dcd859c48-l79gq\" (UID: \"1ebaf9fe-8489-4f5e-acc3-e48ad95213f0\") " pod="tigera-operator/tigera-operator-7dcd859c48-l79gq" Nov 5 15:02:59.353755 kubelet[3405]: I1105 15:02:59.353757 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdcs2\" (UniqueName: \"kubernetes.io/projected/1ebaf9fe-8489-4f5e-acc3-e48ad95213f0-kube-api-access-zdcs2\") pod \"tigera-operator-7dcd859c48-l79gq\" (UID: \"1ebaf9fe-8489-4f5e-acc3-e48ad95213f0\") " pod="tigera-operator/tigera-operator-7dcd859c48-l79gq" Nov 5 15:02:59.358807 containerd[1976]: time="2025-11-05T15:02:59.358740985Z" level=info msg="CreateContainer within sandbox \"78b4f5381e3d5a623faa6facebb931ded1db37c2535df5664889653a5c53e709\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1562ad714c103253652fe9000826e803b7e86c4e9cc140a70dbdcedd9458bd44\"" Nov 5 15:02:59.361535 containerd[1976]: time="2025-11-05T15:02:59.361401409Z" level=info msg="StartContainer for \"1562ad714c103253652fe9000826e803b7e86c4e9cc140a70dbdcedd9458bd44\"" Nov 5 15:02:59.368744 containerd[1976]: time="2025-11-05T15:02:59.368550445Z" level=info msg="connecting to shim 1562ad714c103253652fe9000826e803b7e86c4e9cc140a70dbdcedd9458bd44" address="unix:///run/containerd/s/2f318f990fbef6a0194dbe99c47c206bc3e2eff8c857b92fce38d043064ce1be" protocol=ttrpc version=3 Nov 5 15:02:59.407532 systemd[1]: Started cri-containerd-1562ad714c103253652fe9000826e803b7e86c4e9cc140a70dbdcedd9458bd44.scope - libcontainer container 1562ad714c103253652fe9000826e803b7e86c4e9cc140a70dbdcedd9458bd44. 
Nov 5 15:02:59.518090 containerd[1976]: time="2025-11-05T15:02:59.517943138Z" level=info msg="StartContainer for \"1562ad714c103253652fe9000826e803b7e86c4e9cc140a70dbdcedd9458bd44\" returns successfully" Nov 5 15:02:59.596927 containerd[1976]: time="2025-11-05T15:02:59.596782682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-l79gq,Uid:1ebaf9fe-8489-4f5e-acc3-e48ad95213f0,Namespace:tigera-operator,Attempt:0,}" Nov 5 15:02:59.650681 containerd[1976]: time="2025-11-05T15:02:59.650598866Z" level=info msg="connecting to shim c48afcc1849f9bc57986eab2aaba1aff02905f227237bb1d7f2489f157b0b58a" address="unix:///run/containerd/s/5e16e146cce79b87822bd91974cb906fbc79433c580254e1da6a3b9fc7246508" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:02:59.711461 systemd[1]: Started cri-containerd-c48afcc1849f9bc57986eab2aaba1aff02905f227237bb1d7f2489f157b0b58a.scope - libcontainer container c48afcc1849f9bc57986eab2aaba1aff02905f227237bb1d7f2489f157b0b58a. Nov 5 15:02:59.838591 containerd[1976]: time="2025-11-05T15:02:59.838438863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-l79gq,Uid:1ebaf9fe-8489-4f5e-acc3-e48ad95213f0,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c48afcc1849f9bc57986eab2aaba1aff02905f227237bb1d7f2489f157b0b58a\"" Nov 5 15:02:59.842748 containerd[1976]: time="2025-11-05T15:02:59.842334567Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 5 15:03:01.158875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount52675869.mount: Deactivated successfully. Nov 5 15:03:01.268527 kubelet[3405]: I1105 15:03:01.268435 3405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7nvrm" podStartSLOduration=3.268395602 podStartE2EDuration="3.268395602s" podCreationTimestamp="2025-11-05 15:02:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:02:59.951403564 +0000 UTC m=+6.495345214" watchObservedRunningTime="2025-11-05 15:03:01.268395602 +0000 UTC m=+7.812337216" Nov 5 15:03:02.119515 containerd[1976]: time="2025-11-05T15:03:02.119445374Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:02.122092 containerd[1976]: time="2025-11-05T15:03:02.122022950Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 5 15:03:02.123275 containerd[1976]: time="2025-11-05T15:03:02.123220899Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:02.127242 containerd[1976]: time="2025-11-05T15:03:02.126308631Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:02.128700 containerd[1976]: time="2025-11-05T15:03:02.128644167Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.286167268s" Nov 5 15:03:02.128904 containerd[1976]: 
time="2025-11-05T15:03:02.128872395Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 5 15:03:02.137078 containerd[1976]: time="2025-11-05T15:03:02.137010759Z" level=info msg="CreateContainer within sandbox \"c48afcc1849f9bc57986eab2aaba1aff02905f227237bb1d7f2489f157b0b58a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 5 15:03:02.166417 containerd[1976]: time="2025-11-05T15:03:02.166338807Z" level=info msg="Container be7013d0d444d488e47c8ebcf20fdf616024d2852f775e3f63760215d8fcda74: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:03:02.175718 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2996444830.mount: Deactivated successfully. Nov 5 15:03:02.183506 containerd[1976]: time="2025-11-05T15:03:02.183411375Z" level=info msg="CreateContainer within sandbox \"c48afcc1849f9bc57986eab2aaba1aff02905f227237bb1d7f2489f157b0b58a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"be7013d0d444d488e47c8ebcf20fdf616024d2852f775e3f63760215d8fcda74\"" Nov 5 15:03:02.185205 containerd[1976]: time="2025-11-05T15:03:02.184908111Z" level=info msg="StartContainer for \"be7013d0d444d488e47c8ebcf20fdf616024d2852f775e3f63760215d8fcda74\"" Nov 5 15:03:02.188559 containerd[1976]: time="2025-11-05T15:03:02.188469423Z" level=info msg="connecting to shim be7013d0d444d488e47c8ebcf20fdf616024d2852f775e3f63760215d8fcda74" address="unix:///run/containerd/s/5e16e146cce79b87822bd91974cb906fbc79433c580254e1da6a3b9fc7246508" protocol=ttrpc version=3 Nov 5 15:03:02.232491 systemd[1]: Started cri-containerd-be7013d0d444d488e47c8ebcf20fdf616024d2852f775e3f63760215d8fcda74.scope - libcontainer container be7013d0d444d488e47c8ebcf20fdf616024d2852f775e3f63760215d8fcda74. Nov 5 15:03:02.296561 containerd[1976]: time="2025-11-05T15:03:02.296478771Z" level=info msg="StartContainer for \"be7013d0d444d488e47c8ebcf20fdf616024d2852f775e3f63760215d8fcda74\" returns successfully" Nov 5 15:03:03.896020 kubelet[3405]: I1105 15:03:03.895845 3405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-l79gq" podStartSLOduration=2.605034171 podStartE2EDuration="4.895819315s" podCreationTimestamp="2025-11-05 15:02:59 +0000 UTC" firstStartedPulling="2025-11-05 15:02:59.840867651 +0000 UTC m=+6.384809277" lastFinishedPulling="2025-11-05 15:03:02.131652795 +0000 UTC m=+8.675594421" observedRunningTime="2025-11-05 15:03:02.935640595 +0000 UTC m=+9.479582245" watchObservedRunningTime="2025-11-05 15:03:03.895819315 +0000 UTC m=+10.439760929" Nov 5 15:03:09.419619 sudo[2353]: pam_unix(sudo:session): session closed for user root Nov 5 15:03:09.447339 sshd[2352]: Connection closed by 139.178.89.65 port 57638 Nov 5 15:03:09.448424 sshd-session[2349]: pam_unix(sshd:session): session closed for user core Nov 5 15:03:09.462932 systemd[1]: sshd@6-172.31.21.83:22-139.178.89.65:57638.service: Deactivated successfully. Nov 5 15:03:09.474605 systemd[1]: session-7.scope: Deactivated successfully. Nov 5 15:03:09.476092 systemd[1]: session-7.scope: Consumed 11.910s CPU time, 223.6M memory peak. Nov 5 15:03:09.481271 systemd-logind[1959]: Session 7 logged out. Waiting for processes to exit. Nov 5 15:03:09.490312 systemd-logind[1959]: Removed session 7. 
Nov 5 15:03:29.671765 systemd[1]: Created slice kubepods-besteffort-podeb4ad1ea_5460_45c2_983c_00f98e41cd7e.slice - libcontainer container kubepods-besteffort-podeb4ad1ea_5460_45c2_983c_00f98e41cd7e.slice. Nov 5 15:03:29.767802 kubelet[3405]: I1105 15:03:29.767584 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/eb4ad1ea-5460-45c2-983c-00f98e41cd7e-typha-certs\") pod \"calico-typha-57899b4d6f-gl977\" (UID: \"eb4ad1ea-5460-45c2-983c-00f98e41cd7e\") " pod="calico-system/calico-typha-57899b4d6f-gl977" Nov 5 15:03:29.769791 kubelet[3405]: I1105 15:03:29.769261 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eb4ad1ea-5460-45c2-983c-00f98e41cd7e-tigera-ca-bundle\") pod \"calico-typha-57899b4d6f-gl977\" (UID: \"eb4ad1ea-5460-45c2-983c-00f98e41cd7e\") " pod="calico-system/calico-typha-57899b4d6f-gl977" Nov 5 15:03:29.769791 kubelet[3405]: I1105 15:03:29.769328 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4rfd\" (UniqueName: \"kubernetes.io/projected/eb4ad1ea-5460-45c2-983c-00f98e41cd7e-kube-api-access-t4rfd\") pod \"calico-typha-57899b4d6f-gl977\" (UID: \"eb4ad1ea-5460-45c2-983c-00f98e41cd7e\") " pod="calico-system/calico-typha-57899b4d6f-gl977" Nov 5 15:03:29.873109 systemd[1]: Created slice kubepods-besteffort-podb4313213_5438_4645_a930_ce772e9d1d3d.slice - libcontainer container kubepods-besteffort-podb4313213_5438_4645_a930_ce772e9d1d3d.slice. Nov 5 15:03:29.977468 kubelet[3405]: I1105 15:03:29.976721 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b4313213-5438-4645-a930-ce772e9d1d3d-policysync\") pod \"calico-node-ncrjk\" (UID: \"b4313213-5438-4645-a930-ce772e9d1d3d\") " pod="calico-system/calico-node-ncrjk" Nov 5 15:03:29.977468 kubelet[3405]: I1105 15:03:29.976817 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b4313213-5438-4645-a930-ce772e9d1d3d-tigera-ca-bundle\") pod \"calico-node-ncrjk\" (UID: \"b4313213-5438-4645-a930-ce772e9d1d3d\") " pod="calico-system/calico-node-ncrjk" Nov 5 15:03:29.977468 kubelet[3405]: I1105 15:03:29.976859 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gncc\" (UniqueName: \"kubernetes.io/projected/b4313213-5438-4645-a930-ce772e9d1d3d-kube-api-access-2gncc\") pod \"calico-node-ncrjk\" (UID: \"b4313213-5438-4645-a930-ce772e9d1d3d\") " pod="calico-system/calico-node-ncrjk" Nov 5 15:03:29.977468 kubelet[3405]: I1105 15:03:29.976928 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b4313213-5438-4645-a930-ce772e9d1d3d-cni-log-dir\") pod \"calico-node-ncrjk\" (UID: \"b4313213-5438-4645-a930-ce772e9d1d3d\") " pod="calico-system/calico-node-ncrjk" Nov 5 15:03:29.977468 kubelet[3405]: I1105 15:03:29.977014 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b4313213-5438-4645-a930-ce772e9d1d3d-var-lib-calico\") pod \"calico-node-ncrjk\" (UID: \"b4313213-5438-4645-a930-ce772e9d1d3d\") " 
pod="calico-system/calico-node-ncrjk" Nov 5 15:03:29.977803 kubelet[3405]: I1105 15:03:29.977055 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b4313213-5438-4645-a930-ce772e9d1d3d-cni-net-dir\") pod \"calico-node-ncrjk\" (UID: \"b4313213-5438-4645-a930-ce772e9d1d3d\") " pod="calico-system/calico-node-ncrjk" Nov 5 15:03:29.977803 kubelet[3405]: I1105 15:03:29.977094 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b4313213-5438-4645-a930-ce772e9d1d3d-var-run-calico\") pod \"calico-node-ncrjk\" (UID: \"b4313213-5438-4645-a930-ce772e9d1d3d\") " pod="calico-system/calico-node-ncrjk" Nov 5 15:03:29.977803 kubelet[3405]: I1105 15:03:29.977171 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4313213-5438-4645-a930-ce772e9d1d3d-xtables-lock\") pod \"calico-node-ncrjk\" (UID: \"b4313213-5438-4645-a930-ce772e9d1d3d\") " pod="calico-system/calico-node-ncrjk" Nov 5 15:03:29.977803 kubelet[3405]: I1105 15:03:29.977216 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4313213-5438-4645-a930-ce772e9d1d3d-lib-modules\") pod \"calico-node-ncrjk\" (UID: \"b4313213-5438-4645-a930-ce772e9d1d3d\") " pod="calico-system/calico-node-ncrjk" Nov 5 15:03:29.977803 kubelet[3405]: I1105 15:03:29.977251 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b4313213-5438-4645-a930-ce772e9d1d3d-node-certs\") pod \"calico-node-ncrjk\" (UID: \"b4313213-5438-4645-a930-ce772e9d1d3d\") " pod="calico-system/calico-node-ncrjk" Nov 5 15:03:29.978041 kubelet[3405]: I1105 15:03:29.977311 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b4313213-5438-4645-a930-ce772e9d1d3d-cni-bin-dir\") pod \"calico-node-ncrjk\" (UID: \"b4313213-5438-4645-a930-ce772e9d1d3d\") " pod="calico-system/calico-node-ncrjk" Nov 5 15:03:29.978041 kubelet[3405]: I1105 15:03:29.977350 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b4313213-5438-4645-a930-ce772e9d1d3d-flexvol-driver-host\") pod \"calico-node-ncrjk\" (UID: \"b4313213-5438-4645-a930-ce772e9d1d3d\") " pod="calico-system/calico-node-ncrjk" Nov 5 15:03:29.992756 containerd[1976]: time="2025-11-05T15:03:29.992686809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57899b4d6f-gl977,Uid:eb4ad1ea-5460-45c2-983c-00f98e41cd7e,Namespace:calico-system,Attempt:0,}" Nov 5 15:03:30.016099 kubelet[3405]: E1105 15:03:30.015733 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qgwk8" podUID="ef9a0063-5427-4eaf-b6d6-01cd9334db4b" Nov 5 15:03:30.065404 containerd[1976]: time="2025-11-05T15:03:30.065329937Z" level=info msg="connecting to shim 759926ceeda23efd5364935674b5a03919ec85bda6810d540f32db5427dea30b" 
address="unix:///run/containerd/s/60e7cf97885364a83f1c448054d86aaa0666d067a1dde0433a654573aee1c63d" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:03:30.078605 kubelet[3405]: I1105 15:03:30.078478 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ef9a0063-5427-4eaf-b6d6-01cd9334db4b-socket-dir\") pod \"csi-node-driver-qgwk8\" (UID: \"ef9a0063-5427-4eaf-b6d6-01cd9334db4b\") " pod="calico-system/csi-node-driver-qgwk8" Nov 5 15:03:30.080522 kubelet[3405]: I1105 15:03:30.080280 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef9a0063-5427-4eaf-b6d6-01cd9334db4b-kubelet-dir\") pod \"csi-node-driver-qgwk8\" (UID: \"ef9a0063-5427-4eaf-b6d6-01cd9334db4b\") " pod="calico-system/csi-node-driver-qgwk8" Nov 5 15:03:30.080522 kubelet[3405]: I1105 15:03:30.080365 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ef9a0063-5427-4eaf-b6d6-01cd9334db4b-registration-dir\") pod \"csi-node-driver-qgwk8\" (UID: \"ef9a0063-5427-4eaf-b6d6-01cd9334db4b\") " pod="calico-system/csi-node-driver-qgwk8" Nov 5 15:03:30.084632 kubelet[3405]: I1105 15:03:30.083228 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ef9a0063-5427-4eaf-b6d6-01cd9334db4b-varrun\") pod \"csi-node-driver-qgwk8\" (UID: \"ef9a0063-5427-4eaf-b6d6-01cd9334db4b\") " pod="calico-system/csi-node-driver-qgwk8" Nov 5 15:03:30.084632 kubelet[3405]: I1105 15:03:30.083422 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7pv5\" (UniqueName: \"kubernetes.io/projected/ef9a0063-5427-4eaf-b6d6-01cd9334db4b-kube-api-access-k7pv5\") pod \"csi-node-driver-qgwk8\" (UID: \"ef9a0063-5427-4eaf-b6d6-01cd9334db4b\") " pod="calico-system/csi-node-driver-qgwk8" Nov 5 15:03:30.101305 kubelet[3405]: E1105 15:03:30.099287 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.101305 kubelet[3405]: W1105 15:03:30.099617 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.101305 kubelet[3405]: E1105 15:03:30.099669 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.101581 kubelet[3405]: E1105 15:03:30.101465 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.101960 kubelet[3405]: W1105 15:03:30.101497 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.102907 kubelet[3405]: E1105 15:03:30.102289 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:30.106184 kubelet[3405]: E1105 15:03:30.105380 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.106184 kubelet[3405]: W1105 15:03:30.105431 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.106513 kubelet[3405]: E1105 15:03:30.106301 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.106513 kubelet[3405]: W1105 15:03:30.106328 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.107499 kubelet[3405]: E1105 15:03:30.107077 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.107499 kubelet[3405]: E1105 15:03:30.107143 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.109893 kubelet[3405]: E1105 15:03:30.109682 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.109893 kubelet[3405]: W1105 15:03:30.109724 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.111486 kubelet[3405]: E1105 15:03:30.110140 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.115985 kubelet[3405]: E1105 15:03:30.115744 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.115985 kubelet[3405]: W1105 15:03:30.115790 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.117296 kubelet[3405]: E1105 15:03:30.116663 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.118540 kubelet[3405]: E1105 15:03:30.118475 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.119814 kubelet[3405]: W1105 15:03:30.119422 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.120494 kubelet[3405]: E1105 15:03:30.120424 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:30.124492 kubelet[3405]: E1105 15:03:30.124417 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.124620 kubelet[3405]: W1105 15:03:30.124582 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.124620 kubelet[3405]: E1105 15:03:30.124824 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.127021 kubelet[3405]: E1105 15:03:30.126680 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.127021 kubelet[3405]: W1105 15:03:30.126755 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.128397 kubelet[3405]: E1105 15:03:30.127342 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.129087 kubelet[3405]: E1105 15:03:30.128508 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.129087 kubelet[3405]: W1105 15:03:30.128816 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.129848 kubelet[3405]: E1105 15:03:30.129078 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.133121 kubelet[3405]: E1105 15:03:30.133069 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.133121 kubelet[3405]: W1105 15:03:30.133110 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.135366 kubelet[3405]: E1105 15:03:30.134222 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.138242 kubelet[3405]: E1105 15:03:30.137224 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.138242 kubelet[3405]: W1105 15:03:30.137265 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.138242 kubelet[3405]: E1105 15:03:30.137332 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:30.138491 kubelet[3405]: E1105 15:03:30.138312 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.138491 kubelet[3405]: W1105 15:03:30.138338 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.142326 kubelet[3405]: E1105 15:03:30.138686 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.142326 kubelet[3405]: E1105 15:03:30.140055 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.142326 kubelet[3405]: W1105 15:03:30.140084 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.142326 kubelet[3405]: E1105 15:03:30.140138 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.143568 kubelet[3405]: E1105 15:03:30.143417 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.143568 kubelet[3405]: W1105 15:03:30.143462 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.143568 kubelet[3405]: E1105 15:03:30.143508 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.148710 kubelet[3405]: E1105 15:03:30.148620 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.149706 kubelet[3405]: W1105 15:03:30.149028 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.149706 kubelet[3405]: E1105 15:03:30.149081 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.186197 kubelet[3405]: E1105 15:03:30.185346 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.186197 kubelet[3405]: W1105 15:03:30.185387 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.186197 kubelet[3405]: E1105 15:03:30.185420 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:30.187344 kubelet[3405]: E1105 15:03:30.187299 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.188289 kubelet[3405]: W1105 15:03:30.188239 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.188505 kubelet[3405]: E1105 15:03:30.188476 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.189783 systemd[1]: Started cri-containerd-759926ceeda23efd5364935674b5a03919ec85bda6810d540f32db5427dea30b.scope - libcontainer container 759926ceeda23efd5364935674b5a03919ec85bda6810d540f32db5427dea30b. Nov 5 15:03:30.191631 kubelet[3405]: E1105 15:03:30.191490 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.192378 kubelet[3405]: W1105 15:03:30.191939 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.192378 kubelet[3405]: E1105 15:03:30.192110 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.193940 kubelet[3405]: E1105 15:03:30.193887 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.194742 kubelet[3405]: W1105 15:03:30.194110 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.194742 kubelet[3405]: E1105 15:03:30.194185 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.197188 kubelet[3405]: E1105 15:03:30.196413 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.197188 kubelet[3405]: W1105 15:03:30.196457 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.197188 kubelet[3405]: E1105 15:03:30.196494 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:30.198477 kubelet[3405]: E1105 15:03:30.198437 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.198758 kubelet[3405]: W1105 15:03:30.198687 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.200241 kubelet[3405]: E1105 15:03:30.198956 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.200524 kubelet[3405]: E1105 15:03:30.200490 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.200967 kubelet[3405]: W1105 15:03:30.200798 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.201433 kubelet[3405]: E1105 15:03:30.201243 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.202926 kubelet[3405]: E1105 15:03:30.202777 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.203821 kubelet[3405]: W1105 15:03:30.203312 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.203821 kubelet[3405]: E1105 15:03:30.203367 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.206602 kubelet[3405]: E1105 15:03:30.206458 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.207409 kubelet[3405]: W1105 15:03:30.206951 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.208294 kubelet[3405]: E1105 15:03:30.208096 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:30.211050 kubelet[3405]: E1105 15:03:30.210298 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.211050 kubelet[3405]: W1105 15:03:30.210341 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.213199 kubelet[3405]: E1105 15:03:30.211898 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.213199 kubelet[3405]: W1105 15:03:30.211934 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.213879 kubelet[3405]: E1105 15:03:30.213832 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.214052 kubelet[3405]: W1105 15:03:30.214021 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.214659 kubelet[3405]: E1105 15:03:30.214592 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.215176 kubelet[3405]: E1105 15:03:30.215002 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.217299 kubelet[3405]: W1105 15:03:30.217221 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.217562 kubelet[3405]: E1105 15:03:30.217520 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.218130 kubelet[3405]: E1105 15:03:30.218082 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.220294 kubelet[3405]: W1105 15:03:30.219879 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.221631 kubelet[3405]: E1105 15:03:30.221492 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.224193 kubelet[3405]: W1105 15:03:30.223836 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.224444 kubelet[3405]: E1105 15:03:30.224406 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:30.234651 kubelet[3405]: E1105 15:03:30.222308 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.246978 kubelet[3405]: E1105 15:03:30.219853 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.248059 kubelet[3405]: E1105 15:03:30.219831 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.248266 kubelet[3405]: E1105 15:03:30.237265 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.251261 kubelet[3405]: W1105 15:03:30.251193 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.251548 kubelet[3405]: E1105 15:03:30.251439 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.254537 kubelet[3405]: E1105 15:03:30.254494 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.255115 kubelet[3405]: W1105 15:03:30.254770 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.256329 kubelet[3405]: E1105 15:03:30.255972 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.261121 kubelet[3405]: E1105 15:03:30.259916 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.261121 kubelet[3405]: W1105 15:03:30.260056 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.261121 kubelet[3405]: E1105 15:03:30.260653 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.261121 kubelet[3405]: E1105 15:03:30.260708 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.261121 kubelet[3405]: W1105 15:03:30.260761 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.261723 kubelet[3405]: E1105 15:03:30.261655 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:30.262392 kubelet[3405]: E1105 15:03:30.262219 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.263218 kubelet[3405]: W1105 15:03:30.262674 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.263218 kubelet[3405]: E1105 15:03:30.262752 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.264477 kubelet[3405]: E1105 15:03:30.264432 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.264729 kubelet[3405]: W1105 15:03:30.264691 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.265663 kubelet[3405]: E1105 15:03:30.265559 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.266746 kubelet[3405]: E1105 15:03:30.266678 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.269409 kubelet[3405]: W1105 15:03:30.269337 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.270632 kubelet[3405]: E1105 15:03:30.270322 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.273323 kubelet[3405]: E1105 15:03:30.273246 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.273323 kubelet[3405]: W1105 15:03:30.273290 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.274714 kubelet[3405]: E1105 15:03:30.274459 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.275384 kubelet[3405]: E1105 15:03:30.275056 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.275557 kubelet[3405]: W1105 15:03:30.275382 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.275840 kubelet[3405]: E1105 15:03:30.275574 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:30.276574 kubelet[3405]: E1105 15:03:30.276528 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.276831 kubelet[3405]: W1105 15:03:30.276794 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.277077 kubelet[3405]: E1105 15:03:30.277029 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.277608 kubelet[3405]: E1105 15:03:30.277575 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.277799 kubelet[3405]: W1105 15:03:30.277770 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.277951 kubelet[3405]: E1105 15:03:30.277923 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.278620 kubelet[3405]: E1105 15:03:30.278584 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:30.278838 kubelet[3405]: W1105 15:03:30.278807 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:30.278993 kubelet[3405]: E1105 15:03:30.278965 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:30.364825 containerd[1976]: time="2025-11-05T15:03:30.364728463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-57899b4d6f-gl977,Uid:eb4ad1ea-5460-45c2-983c-00f98e41cd7e,Namespace:calico-system,Attempt:0,} returns sandbox id \"759926ceeda23efd5364935674b5a03919ec85bda6810d540f32db5427dea30b\"" Nov 5 15:03:30.373448 containerd[1976]: time="2025-11-05T15:03:30.373316563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 15:03:30.491004 containerd[1976]: time="2025-11-05T15:03:30.489239551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ncrjk,Uid:b4313213-5438-4645-a930-ce772e9d1d3d,Namespace:calico-system,Attempt:0,}" Nov 5 15:03:30.534742 containerd[1976]: time="2025-11-05T15:03:30.534633140Z" level=info msg="connecting to shim 148130628b32983aca0ecdbb9b22f81fa24401dd472bb594ca1dabfb10fe54e6" address="unix:///run/containerd/s/6876a64f5cef3e9e5d924963852bfd8e91d5a8210fa92dd8b2e21eee7cfa9dd5" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:03:30.586484 systemd[1]: Started cri-containerd-148130628b32983aca0ecdbb9b22f81fa24401dd472bb594ca1dabfb10fe54e6.scope - libcontainer container 148130628b32983aca0ecdbb9b22f81fa24401dd472bb594ca1dabfb10fe54e6. 
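The block of kubelet errors repeated above (and again at 15:03:34 further down) is one failure mode, not many: on each plugin re-probe kubelet finds the FlexVolume directory nodeagent~uds under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, tries to exec its uds driver with the init argument, the binary does not exist yet, so the call produces no output and unmarshalling the empty string yields "unexpected end of JSON input". The flexvol-driver-host host-path volume on calico-node and the pod2daemon-flexvol image pulled below are what normally install that binary, so the noise should stop once calico-node is running; that is an inference from the log, not something the log states. The sketch below imitates the probe under those assumptions: a FlexVolume driver is an executable that answers init with a JSON status object on stdout.

// Imitate kubelet's FlexVolume probe: exec the driver with "init" and decode
// the JSON status it is expected to print. With the binary missing, the
// output is empty and json.Unmarshal fails with the same error seen above.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus covers the part of the FlexVolume reply used here: drivers
// answer with a JSON object carrying a "status" field.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	// Path taken verbatim from the log lines above.
	driver := "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

	out, err := exec.Command(driver, "init").Output()
	if err != nil {
		fmt.Println("driver call failed:", err) // the "executable not found" case
	}

	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("unmarshal failed:", err) // "unexpected end of JSON input" for empty output
		return
	}
	fmt.Println("driver status:", st.Status)
}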
Nov 5 15:03:30.651617 containerd[1976]: time="2025-11-05T15:03:30.651563924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ncrjk,Uid:b4313213-5438-4645-a930-ce772e9d1d3d,Namespace:calico-system,Attempt:0,} returns sandbox id \"148130628b32983aca0ecdbb9b22f81fa24401dd472bb594ca1dabfb10fe54e6\"" Nov 5 15:03:31.796765 kubelet[3405]: E1105 15:03:31.796685 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qgwk8" podUID="ef9a0063-5427-4eaf-b6d6-01cd9334db4b" Nov 5 15:03:32.189403 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount901756670.mount: Deactivated successfully. Nov 5 15:03:33.661634 containerd[1976]: time="2025-11-05T15:03:33.661543883Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:33.663601 containerd[1976]: time="2025-11-05T15:03:33.663281459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Nov 5 15:03:33.664795 containerd[1976]: time="2025-11-05T15:03:33.664699943Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:33.669514 containerd[1976]: time="2025-11-05T15:03:33.669435983Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:33.673937 containerd[1976]: time="2025-11-05T15:03:33.673858883Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 3.300389608s" Nov 5 15:03:33.674215 containerd[1976]: time="2025-11-05T15:03:33.674135039Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 5 15:03:33.680031 containerd[1976]: time="2025-11-05T15:03:33.679087151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 15:03:33.709976 containerd[1976]: time="2025-11-05T15:03:33.709922219Z" level=info msg="CreateContainer within sandbox \"759926ceeda23efd5364935674b5a03919ec85bda6810d540f32db5427dea30b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 15:03:33.723106 containerd[1976]: time="2025-11-05T15:03:33.723001103Z" level=info msg="Container d4206244cf15eec5bb510aaa4e20c1294438775c550dc67ba3051b4e8d647787: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:03:33.740968 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount663900430.mount: Deactivated successfully. 
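The recurring "Error syncing pod ... cni plugin not initialized" message for csi-node-driver-qgwk8 is the same situation seen from the networking side: the container runtime reports NetworkReady=false until a CNI configuration appears in its conf directory (by default /etc/cni/net.d), and on a Calico cluster that file is written by calico-node's CNI install step, which has not run yet at this point in the log. The default path and the install behaviour are assumptions based on stock containerd and Calico, not read from this host's configuration; a trivial check along those lines:

// Report whether any CNI network config is present in the directory the
// containerd CRI plugin watches by default. An empty directory corresponds
// to the NetworkReady=false condition logged above. The path is the stock
// default, not read from this host's containerd config.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/cni/net.d"
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Println("cannot read", confDir+":", err)
		return
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json": // extensions libcni loads
			fmt.Println("CNI config present:", e.Name())
			return
		}
	}
	fmt.Println("no CNI config yet; the runtime keeps reporting NetworkReady=false")
}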
Nov 5 15:03:33.753990 containerd[1976]: time="2025-11-05T15:03:33.753853992Z" level=info msg="CreateContainer within sandbox \"759926ceeda23efd5364935674b5a03919ec85bda6810d540f32db5427dea30b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d4206244cf15eec5bb510aaa4e20c1294438775c550dc67ba3051b4e8d647787\"" Nov 5 15:03:33.755784 containerd[1976]: time="2025-11-05T15:03:33.755709216Z" level=info msg="StartContainer for \"d4206244cf15eec5bb510aaa4e20c1294438775c550dc67ba3051b4e8d647787\"" Nov 5 15:03:33.758873 containerd[1976]: time="2025-11-05T15:03:33.758800200Z" level=info msg="connecting to shim d4206244cf15eec5bb510aaa4e20c1294438775c550dc67ba3051b4e8d647787" address="unix:///run/containerd/s/60e7cf97885364a83f1c448054d86aaa0666d067a1dde0433a654573aee1c63d" protocol=ttrpc version=3 Nov 5 15:03:33.797687 kubelet[3405]: E1105 15:03:33.797608 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qgwk8" podUID="ef9a0063-5427-4eaf-b6d6-01cd9334db4b" Nov 5 15:03:33.812510 systemd[1]: Started cri-containerd-d4206244cf15eec5bb510aaa4e20c1294438775c550dc67ba3051b4e8d647787.scope - libcontainer container d4206244cf15eec5bb510aaa4e20c1294438775c550dc67ba3051b4e8d647787. Nov 5 15:03:33.908715 containerd[1976]: time="2025-11-05T15:03:33.908490336Z" level=info msg="StartContainer for \"d4206244cf15eec5bb510aaa4e20c1294438775c550dc67ba3051b4e8d647787\" returns successfully" Nov 5 15:03:34.060621 kubelet[3405]: E1105 15:03:34.060406 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.060621 kubelet[3405]: W1105 15:03:34.060455 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.060621 kubelet[3405]: E1105 15:03:34.060492 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.063422 kubelet[3405]: E1105 15:03:34.063353 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.063422 kubelet[3405]: W1105 15:03:34.063393 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.064406 kubelet[3405]: E1105 15:03:34.063466 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:34.064878 kubelet[3405]: E1105 15:03:34.064730 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.064878 kubelet[3405]: W1105 15:03:34.064769 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.064878 kubelet[3405]: E1105 15:03:34.064803 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.066204 kubelet[3405]: E1105 15:03:34.065896 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.066204 kubelet[3405]: W1105 15:03:34.065936 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.066204 kubelet[3405]: E1105 15:03:34.065970 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.068351 kubelet[3405]: E1105 15:03:34.067364 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.068351 kubelet[3405]: W1105 15:03:34.067406 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.068351 kubelet[3405]: E1105 15:03:34.067439 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.069343 kubelet[3405]: E1105 15:03:34.068688 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.069343 kubelet[3405]: W1105 15:03:34.068732 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.069343 kubelet[3405]: E1105 15:03:34.068767 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.071079 kubelet[3405]: E1105 15:03:34.070044 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.071079 kubelet[3405]: W1105 15:03:34.070081 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.071079 kubelet[3405]: E1105 15:03:34.070115 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:34.071381 kubelet[3405]: E1105 15:03:34.071284 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.071381 kubelet[3405]: W1105 15:03:34.071312 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.071381 kubelet[3405]: E1105 15:03:34.071345 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.072643 kubelet[3405]: E1105 15:03:34.072590 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.072643 kubelet[3405]: W1105 15:03:34.072630 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.073060 kubelet[3405]: E1105 15:03:34.072664 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.073596 kubelet[3405]: E1105 15:03:34.073548 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.073596 kubelet[3405]: W1105 15:03:34.073589 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.073793 kubelet[3405]: E1105 15:03:34.073626 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.076193 kubelet[3405]: E1105 15:03:34.076111 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.076336 kubelet[3405]: W1105 15:03:34.076201 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.076336 kubelet[3405]: E1105 15:03:34.076240 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.077646 kubelet[3405]: E1105 15:03:34.077497 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.077646 kubelet[3405]: W1105 15:03:34.077527 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.077646 kubelet[3405]: E1105 15:03:34.077559 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:34.078301 kubelet[3405]: E1105 15:03:34.078257 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.078301 kubelet[3405]: W1105 15:03:34.078294 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.078557 kubelet[3405]: E1105 15:03:34.078327 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.079729 kubelet[3405]: E1105 15:03:34.079674 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.079729 kubelet[3405]: W1105 15:03:34.079716 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.079729 kubelet[3405]: E1105 15:03:34.079751 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.081504 kubelet[3405]: E1105 15:03:34.081451 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.081504 kubelet[3405]: W1105 15:03:34.081490 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.081504 kubelet[3405]: E1105 15:03:34.081525 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.129533 kubelet[3405]: E1105 15:03:34.129449 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.129533 kubelet[3405]: W1105 15:03:34.129489 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.129857 kubelet[3405]: E1105 15:03:34.129798 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.130718 kubelet[3405]: E1105 15:03:34.130658 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.130955 kubelet[3405]: W1105 15:03:34.130863 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.132235 kubelet[3405]: E1105 15:03:34.131196 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:34.132579 kubelet[3405]: E1105 15:03:34.132531 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.132579 kubelet[3405]: W1105 15:03:34.132573 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.132579 kubelet[3405]: E1105 15:03:34.132621 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.133050 kubelet[3405]: E1105 15:03:34.132972 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.133050 kubelet[3405]: W1105 15:03:34.132995 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.133050 kubelet[3405]: E1105 15:03:34.133034 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.133565 kubelet[3405]: E1105 15:03:34.133433 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.133565 kubelet[3405]: W1105 15:03:34.133455 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.133565 kubelet[3405]: E1105 15:03:34.133495 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.135535 kubelet[3405]: E1105 15:03:34.135477 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.135535 kubelet[3405]: W1105 15:03:34.135523 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.136385 kubelet[3405]: E1105 15:03:34.135602 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.138373 kubelet[3405]: E1105 15:03:34.138313 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.138373 kubelet[3405]: W1105 15:03:34.138358 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.138674 kubelet[3405]: E1105 15:03:34.138512 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:34.139184 kubelet[3405]: E1105 15:03:34.139106 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.139184 kubelet[3405]: W1105 15:03:34.139145 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.139488 kubelet[3405]: E1105 15:03:34.139256 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.139574 kubelet[3405]: E1105 15:03:34.139536 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.139574 kubelet[3405]: W1105 15:03:34.139559 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.139860 kubelet[3405]: E1105 15:03:34.139654 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.141521 kubelet[3405]: E1105 15:03:34.141464 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.141521 kubelet[3405]: W1105 15:03:34.141507 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.141839 kubelet[3405]: E1105 15:03:34.141560 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.143467 kubelet[3405]: E1105 15:03:34.143411 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.143467 kubelet[3405]: W1105 15:03:34.143454 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.143781 kubelet[3405]: E1105 15:03:34.143609 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.144145 kubelet[3405]: E1105 15:03:34.144098 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.145285 kubelet[3405]: W1105 15:03:34.144136 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.145507 kubelet[3405]: E1105 15:03:34.145323 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:34.145760 kubelet[3405]: E1105 15:03:34.145718 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.145760 kubelet[3405]: W1105 15:03:34.145754 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.146028 kubelet[3405]: E1105 15:03:34.145891 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.146379 kubelet[3405]: E1105 15:03:34.146324 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.146379 kubelet[3405]: W1105 15:03:34.146361 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.146600 kubelet[3405]: E1105 15:03:34.146407 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.148444 kubelet[3405]: E1105 15:03:34.148401 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.148726 kubelet[3405]: W1105 15:03:34.148611 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.148726 kubelet[3405]: E1105 15:03:34.148683 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.150399 kubelet[3405]: E1105 15:03:34.150318 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.150399 kubelet[3405]: W1105 15:03:34.150357 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.151076 kubelet[3405]: E1105 15:03:34.150514 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:34.152008 kubelet[3405]: E1105 15:03:34.151717 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.152008 kubelet[3405]: W1105 15:03:34.151749 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.152008 kubelet[3405]: E1105 15:03:34.151799 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:34.152673 kubelet[3405]: E1105 15:03:34.152638 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:34.152842 kubelet[3405]: W1105 15:03:34.152811 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:34.153012 kubelet[3405]: E1105 15:03:34.152957 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.079121 kubelet[3405]: I1105 15:03:35.078976 3405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-57899b4d6f-gl977" podStartSLOduration=2.770685394 podStartE2EDuration="6.07894789s" podCreationTimestamp="2025-11-05 15:03:29 +0000 UTC" firstStartedPulling="2025-11-05 15:03:30.370256503 +0000 UTC m=+36.914198117" lastFinishedPulling="2025-11-05 15:03:33.678518903 +0000 UTC m=+40.222460613" observedRunningTime="2025-11-05 15:03:34.092621769 +0000 UTC m=+40.636563419" watchObservedRunningTime="2025-11-05 15:03:35.07894789 +0000 UTC m=+41.622889528" Nov 5 15:03:35.090803 kubelet[3405]: E1105 15:03:35.090747 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.090803 kubelet[3405]: W1105 15:03:35.090803 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.091027 kubelet[3405]: E1105 15:03:35.090841 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.091459 kubelet[3405]: E1105 15:03:35.091414 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.091584 kubelet[3405]: W1105 15:03:35.091451 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.091584 kubelet[3405]: E1105 15:03:35.091533 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.092906 kubelet[3405]: E1105 15:03:35.092851 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.092906 kubelet[3405]: W1105 15:03:35.092894 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.092906 kubelet[3405]: E1105 15:03:35.092929 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:35.094563 kubelet[3405]: E1105 15:03:35.094506 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.094563 kubelet[3405]: W1105 15:03:35.094554 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.094800 kubelet[3405]: E1105 15:03:35.094590 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.096389 kubelet[3405]: E1105 15:03:35.096332 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.096542 kubelet[3405]: W1105 15:03:35.096491 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.096542 kubelet[3405]: E1105 15:03:35.096531 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.097484 kubelet[3405]: E1105 15:03:35.097428 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.097484 kubelet[3405]: W1105 15:03:35.097471 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.098254 kubelet[3405]: E1105 15:03:35.097508 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.098706 kubelet[3405]: E1105 15:03:35.098655 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.098706 kubelet[3405]: W1105 15:03:35.098696 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.098869 kubelet[3405]: E1105 15:03:35.098730 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.100188 kubelet[3405]: E1105 15:03:35.100123 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.101056 kubelet[3405]: W1105 15:03:35.100980 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.101056 kubelet[3405]: E1105 15:03:35.101044 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:35.101969 kubelet[3405]: E1105 15:03:35.101494 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.101969 kubelet[3405]: W1105 15:03:35.101520 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.101969 kubelet[3405]: E1105 15:03:35.101548 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.102864 kubelet[3405]: E1105 15:03:35.102807 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.102864 kubelet[3405]: W1105 15:03:35.102837 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.103046 kubelet[3405]: E1105 15:03:35.102869 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.104597 kubelet[3405]: E1105 15:03:35.104542 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.104597 kubelet[3405]: W1105 15:03:35.104585 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.104597 kubelet[3405]: E1105 15:03:35.104620 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.105500 kubelet[3405]: E1105 15:03:35.105447 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.105500 kubelet[3405]: W1105 15:03:35.105489 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.106258 kubelet[3405]: E1105 15:03:35.105523 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.107025 kubelet[3405]: E1105 15:03:35.106969 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.107025 kubelet[3405]: W1105 15:03:35.107014 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.107287 kubelet[3405]: E1105 15:03:35.107050 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:35.108385 kubelet[3405]: E1105 15:03:35.108330 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.108385 kubelet[3405]: W1105 15:03:35.108373 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.108385 kubelet[3405]: E1105 15:03:35.108407 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.109379 kubelet[3405]: E1105 15:03:35.109326 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.109379 kubelet[3405]: W1105 15:03:35.109368 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.109538 kubelet[3405]: E1105 15:03:35.109402 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.144685 kubelet[3405]: E1105 15:03:35.144637 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.144685 kubelet[3405]: W1105 15:03:35.144678 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.145021 kubelet[3405]: E1105 15:03:35.144713 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.145365 kubelet[3405]: E1105 15:03:35.145115 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.145365 kubelet[3405]: W1105 15:03:35.145170 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.145365 kubelet[3405]: E1105 15:03:35.145200 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.145866 kubelet[3405]: E1105 15:03:35.145591 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.145866 kubelet[3405]: W1105 15:03:35.145615 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.146099 kubelet[3405]: E1105 15:03:35.146064 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:35.146713 kubelet[3405]: E1105 15:03:35.146570 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.146835 kubelet[3405]: W1105 15:03:35.146720 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.146835 kubelet[3405]: E1105 15:03:35.146780 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.147284 kubelet[3405]: E1105 15:03:35.147245 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.147284 kubelet[3405]: W1105 15:03:35.147281 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.147488 kubelet[3405]: E1105 15:03:35.147325 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.147787 kubelet[3405]: E1105 15:03:35.147702 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.147906 kubelet[3405]: W1105 15:03:35.147788 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.148008 kubelet[3405]: E1105 15:03:35.147916 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.148377 kubelet[3405]: E1105 15:03:35.148338 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.148523 kubelet[3405]: W1105 15:03:35.148375 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.148593 kubelet[3405]: E1105 15:03:35.148539 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.149117 kubelet[3405]: E1105 15:03:35.149075 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.149117 kubelet[3405]: W1105 15:03:35.149113 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.149389 kubelet[3405]: E1105 15:03:35.149302 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:35.149835 kubelet[3405]: E1105 15:03:35.149783 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.149835 kubelet[3405]: W1105 15:03:35.149831 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.150196 kubelet[3405]: E1105 15:03:35.150089 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.150360 kubelet[3405]: E1105 15:03:35.150327 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.150437 kubelet[3405]: W1105 15:03:35.150358 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.150667 kubelet[3405]: E1105 15:03:35.150595 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.150944 kubelet[3405]: E1105 15:03:35.150891 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.151126 kubelet[3405]: W1105 15:03:35.150942 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.151126 kubelet[3405]: E1105 15:03:35.151010 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.151505 kubelet[3405]: E1105 15:03:35.151460 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.151505 kubelet[3405]: W1105 15:03:35.151502 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.151673 kubelet[3405]: E1105 15:03:35.151556 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.152326 kubelet[3405]: E1105 15:03:35.152288 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.153252 kubelet[3405]: W1105 15:03:35.152511 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.153252 kubelet[3405]: E1105 15:03:35.152578 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:35.153807 kubelet[3405]: E1105 15:03:35.153721 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.153807 kubelet[3405]: W1105 15:03:35.153765 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.154129 kubelet[3405]: E1105 15:03:35.154079 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.154827 kubelet[3405]: E1105 15:03:35.154768 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.155291 kubelet[3405]: W1105 15:03:35.154950 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.155291 kubelet[3405]: E1105 15:03:35.155218 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.155636 kubelet[3405]: E1105 15:03:35.155512 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.155636 kubelet[3405]: W1105 15:03:35.155566 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.155636 kubelet[3405]: E1105 15:03:35.155601 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.156222 kubelet[3405]: E1105 15:03:35.156145 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.156440 kubelet[3405]: W1105 15:03:35.156222 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.156440 kubelet[3405]: E1105 15:03:35.156256 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:35.157108 kubelet[3405]: E1105 15:03:35.157065 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:35.157108 kubelet[3405]: W1105 15:03:35.157104 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:35.157331 kubelet[3405]: E1105 15:03:35.157139 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:35.796897 kubelet[3405]: E1105 15:03:35.796764 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qgwk8" podUID="ef9a0063-5427-4eaf-b6d6-01cd9334db4b" Nov 5 15:03:36.116859 kubelet[3405]: E1105 15:03:36.116713 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.117850 kubelet[3405]: W1105 15:03:36.117238 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.117850 kubelet[3405]: E1105 15:03:36.117319 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.118635 kubelet[3405]: E1105 15:03:36.118510 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.118635 kubelet[3405]: W1105 15:03:36.118568 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.118968 kubelet[3405]: E1105 15:03:36.118603 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.119600 kubelet[3405]: E1105 15:03:36.119549 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.119600 kubelet[3405]: W1105 15:03:36.119595 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.119905 kubelet[3405]: E1105 15:03:36.119633 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.120094 kubelet[3405]: E1105 15:03:36.120055 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.120334 kubelet[3405]: W1105 15:03:36.120093 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.120334 kubelet[3405]: E1105 15:03:36.120127 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:36.120636 kubelet[3405]: E1105 15:03:36.120560 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.120636 kubelet[3405]: W1105 15:03:36.120586 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.120636 kubelet[3405]: E1105 15:03:36.120615 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.120989 kubelet[3405]: E1105 15:03:36.120955 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.120989 kubelet[3405]: W1105 15:03:36.120977 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.121227 kubelet[3405]: E1105 15:03:36.121003 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.121437 kubelet[3405]: E1105 15:03:36.121396 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.121437 kubelet[3405]: W1105 15:03:36.121431 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.121600 kubelet[3405]: E1105 15:03:36.121464 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.121882 kubelet[3405]: E1105 15:03:36.121846 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.121972 kubelet[3405]: W1105 15:03:36.121882 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.121972 kubelet[3405]: E1105 15:03:36.121912 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.122375 kubelet[3405]: E1105 15:03:36.122337 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.122375 kubelet[3405]: W1105 15:03:36.122373 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.122601 kubelet[3405]: E1105 15:03:36.122406 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:36.122786 kubelet[3405]: E1105 15:03:36.122754 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.122859 kubelet[3405]: W1105 15:03:36.122785 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.122859 kubelet[3405]: E1105 15:03:36.122818 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.123228 kubelet[3405]: E1105 15:03:36.123182 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.123331 kubelet[3405]: W1105 15:03:36.123229 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.123331 kubelet[3405]: E1105 15:03:36.123257 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.123659 kubelet[3405]: E1105 15:03:36.123621 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.123659 kubelet[3405]: W1105 15:03:36.123657 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.123839 kubelet[3405]: E1105 15:03:36.123689 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.124141 kubelet[3405]: E1105 15:03:36.124100 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.124296 kubelet[3405]: W1105 15:03:36.124138 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.124296 kubelet[3405]: E1105 15:03:36.124262 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.124685 kubelet[3405]: E1105 15:03:36.124646 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.124685 kubelet[3405]: W1105 15:03:36.124683 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.124861 kubelet[3405]: E1105 15:03:36.124716 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:36.125142 kubelet[3405]: E1105 15:03:36.125103 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.125268 kubelet[3405]: W1105 15:03:36.125139 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.125268 kubelet[3405]: E1105 15:03:36.125222 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.154796 kubelet[3405]: E1105 15:03:36.154622 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.154796 kubelet[3405]: W1105 15:03:36.154662 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.154796 kubelet[3405]: E1105 15:03:36.154696 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.155306 kubelet[3405]: E1105 15:03:36.155259 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.155306 kubelet[3405]: W1105 15:03:36.155302 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.155946 kubelet[3405]: E1105 15:03:36.155427 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.156122 kubelet[3405]: E1105 15:03:36.156087 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.156308 kubelet[3405]: W1105 15:03:36.156277 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.156456 kubelet[3405]: E1105 15:03:36.156427 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.157016 kubelet[3405]: E1105 15:03:36.156970 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.157016 kubelet[3405]: W1105 15:03:36.157012 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.157368 kubelet[3405]: E1105 15:03:36.157064 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:36.157645 kubelet[3405]: E1105 15:03:36.157606 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.157768 kubelet[3405]: W1105 15:03:36.157644 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.157966 kubelet[3405]: E1105 15:03:36.157827 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.160199 kubelet[3405]: E1105 15:03:36.158075 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.160199 kubelet[3405]: W1105 15:03:36.158245 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.160199 kubelet[3405]: E1105 15:03:36.158667 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.160199 kubelet[3405]: W1105 15:03:36.158694 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.160199 kubelet[3405]: E1105 15:03:36.159135 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.160199 kubelet[3405]: W1105 15:03:36.159202 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.160199 kubelet[3405]: E1105 15:03:36.159324 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.160199 kubelet[3405]: E1105 15:03:36.159717 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.160199 kubelet[3405]: W1105 15:03:36.159744 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.160199 kubelet[3405]: E1105 15:03:36.159774 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.160800 kubelet[3405]: E1105 15:03:36.160070 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.160800 kubelet[3405]: E1105 15:03:36.160133 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:36.160800 kubelet[3405]: E1105 15:03:36.160266 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.160800 kubelet[3405]: W1105 15:03:36.160290 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.160800 kubelet[3405]: E1105 15:03:36.160321 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.161694 kubelet[3405]: E1105 15:03:36.161654 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.161886 kubelet[3405]: W1105 15:03:36.161854 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.162040 kubelet[3405]: E1105 15:03:36.162013 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.162840 kubelet[3405]: E1105 15:03:36.162631 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.162840 kubelet[3405]: W1105 15:03:36.162667 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.162840 kubelet[3405]: E1105 15:03:36.162716 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.163656 kubelet[3405]: E1105 15:03:36.163616 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.164009 kubelet[3405]: W1105 15:03:36.163829 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.164009 kubelet[3405]: E1105 15:03:36.163896 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:36.164660 kubelet[3405]: E1105 15:03:36.164623 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.165195 kubelet[3405]: W1105 15:03:36.164826 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.165630 kubelet[3405]: E1105 15:03:36.165591 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.165840 kubelet[3405]: W1105 15:03:36.165804 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.165992 kubelet[3405]: E1105 15:03:36.165964 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.166869 kubelet[3405]: E1105 15:03:36.166825 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.167271 kubelet[3405]: W1105 15:03:36.167051 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.167271 kubelet[3405]: E1105 15:03:36.167097 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.167742 kubelet[3405]: E1105 15:03:36.167460 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.168238 kubelet[3405]: E1105 15:03:36.168198 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.169189 kubelet[3405]: W1105 15:03:36.168418 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.169189 kubelet[3405]: E1105 15:03:36.168468 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:03:36.170388 kubelet[3405]: E1105 15:03:36.170347 3405 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:03:36.170605 kubelet[3405]: W1105 15:03:36.170573 3405 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:03:36.170745 kubelet[3405]: E1105 15:03:36.170719 3405 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:03:36.850664 containerd[1976]: time="2025-11-05T15:03:36.850588539Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:36.852271 containerd[1976]: time="2025-11-05T15:03:36.852219087Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 5 15:03:36.853240 containerd[1976]: time="2025-11-05T15:03:36.853122123Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:36.859145 containerd[1976]: time="2025-11-05T15:03:36.858224007Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:36.860061 containerd[1976]: time="2025-11-05T15:03:36.859963923Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 3.180013756s" Nov 5 15:03:36.860061 containerd[1976]: time="2025-11-05T15:03:36.860046291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 5 15:03:36.868717 containerd[1976]: time="2025-11-05T15:03:36.868662699Z" level=info msg="CreateContainer within sandbox \"148130628b32983aca0ecdbb9b22f81fa24401dd472bb594ca1dabfb10fe54e6\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 15:03:36.894506 containerd[1976]: time="2025-11-05T15:03:36.894445383Z" level=info msg="Container 4d4eca72c7d3bfbe31a72a144bbfed19b3a34bb3615513528f944d40ef467020: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:03:36.911583 containerd[1976]: time="2025-11-05T15:03:36.911466027Z" level=info msg="CreateContainer within sandbox \"148130628b32983aca0ecdbb9b22f81fa24401dd472bb594ca1dabfb10fe54e6\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4d4eca72c7d3bfbe31a72a144bbfed19b3a34bb3615513528f944d40ef467020\"" Nov 5 15:03:36.912769 containerd[1976]: time="2025-11-05T15:03:36.912622527Z" level=info msg="StartContainer for \"4d4eca72c7d3bfbe31a72a144bbfed19b3a34bb3615513528f944d40ef467020\"" Nov 5 15:03:36.921093 containerd[1976]: time="2025-11-05T15:03:36.921036891Z" level=info msg="connecting to shim 4d4eca72c7d3bfbe31a72a144bbfed19b3a34bb3615513528f944d40ef467020" address="unix:///run/containerd/s/6876a64f5cef3e9e5d924963852bfd8e91d5a8210fa92dd8b2e21eee7cfa9dd5" protocol=ttrpc version=3 Nov 5 15:03:36.970754 systemd[1]: Started cri-containerd-4d4eca72c7d3bfbe31a72a144bbfed19b3a34bb3615513528f944d40ef467020.scope - libcontainer container 4d4eca72c7d3bfbe31a72a144bbfed19b3a34bb3615513528f944d40ef467020. 
Nov 5 15:03:37.090589 containerd[1976]: time="2025-11-05T15:03:37.090490092Z" level=info msg="StartContainer for \"4d4eca72c7d3bfbe31a72a144bbfed19b3a34bb3615513528f944d40ef467020\" returns successfully" Nov 5 15:03:37.118354 systemd[1]: cri-containerd-4d4eca72c7d3bfbe31a72a144bbfed19b3a34bb3615513528f944d40ef467020.scope: Deactivated successfully. Nov 5 15:03:37.128234 containerd[1976]: time="2025-11-05T15:03:37.128169420Z" level=info msg="received exit event container_id:\"4d4eca72c7d3bfbe31a72a144bbfed19b3a34bb3615513528f944d40ef467020\" id:\"4d4eca72c7d3bfbe31a72a144bbfed19b3a34bb3615513528f944d40ef467020\" pid:4120 exited_at:{seconds:1762355017 nanos:127287708}" Nov 5 15:03:37.130888 containerd[1976]: time="2025-11-05T15:03:37.130824348Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4d4eca72c7d3bfbe31a72a144bbfed19b3a34bb3615513528f944d40ef467020\" id:\"4d4eca72c7d3bfbe31a72a144bbfed19b3a34bb3615513528f944d40ef467020\" pid:4120 exited_at:{seconds:1762355017 nanos:127287708}" Nov 5 15:03:37.177194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d4eca72c7d3bfbe31a72a144bbfed19b3a34bb3615513528f944d40ef467020-rootfs.mount: Deactivated successfully. Nov 5 15:03:37.796924 kubelet[3405]: E1105 15:03:37.796866 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qgwk8" podUID="ef9a0063-5427-4eaf-b6d6-01cd9334db4b" Nov 5 15:03:38.078121 containerd[1976]: time="2025-11-05T15:03:38.077858701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 5 15:03:39.797609 kubelet[3405]: E1105 15:03:39.797546 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qgwk8" podUID="ef9a0063-5427-4eaf-b6d6-01cd9334db4b" Nov 5 15:03:41.758612 containerd[1976]: time="2025-11-05T15:03:41.758556979Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:41.760378 containerd[1976]: time="2025-11-05T15:03:41.760325239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 5 15:03:41.760733 containerd[1976]: time="2025-11-05T15:03:41.760698559Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:41.764200 containerd[1976]: time="2025-11-05T15:03:41.763717159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:41.765178 containerd[1976]: time="2025-11-05T15:03:41.765056059Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.687075126s" Nov 5 15:03:41.765178 containerd[1976]: 
time="2025-11-05T15:03:41.765111703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 5 15:03:41.771507 containerd[1976]: time="2025-11-05T15:03:41.771453487Z" level=info msg="CreateContainer within sandbox \"148130628b32983aca0ecdbb9b22f81fa24401dd472bb594ca1dabfb10fe54e6\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 5 15:03:41.792743 containerd[1976]: time="2025-11-05T15:03:41.792693092Z" level=info msg="Container 970d34288a61c25a7469de7b35725f199631823537e75994f3b4449167d047ac: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:03:41.800823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount697453427.mount: Deactivated successfully. Nov 5 15:03:41.809414 kubelet[3405]: E1105 15:03:41.809111 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-qgwk8" podUID="ef9a0063-5427-4eaf-b6d6-01cd9334db4b" Nov 5 15:03:41.826572 containerd[1976]: time="2025-11-05T15:03:41.826495652Z" level=info msg="CreateContainer within sandbox \"148130628b32983aca0ecdbb9b22f81fa24401dd472bb594ca1dabfb10fe54e6\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"970d34288a61c25a7469de7b35725f199631823537e75994f3b4449167d047ac\"" Nov 5 15:03:41.828053 containerd[1976]: time="2025-11-05T15:03:41.827967332Z" level=info msg="StartContainer for \"970d34288a61c25a7469de7b35725f199631823537e75994f3b4449167d047ac\"" Nov 5 15:03:41.832832 containerd[1976]: time="2025-11-05T15:03:41.832743476Z" level=info msg="connecting to shim 970d34288a61c25a7469de7b35725f199631823537e75994f3b4449167d047ac" address="unix:///run/containerd/s/6876a64f5cef3e9e5d924963852bfd8e91d5a8210fa92dd8b2e21eee7cfa9dd5" protocol=ttrpc version=3 Nov 5 15:03:41.875541 systemd[1]: Started cri-containerd-970d34288a61c25a7469de7b35725f199631823537e75994f3b4449167d047ac.scope - libcontainer container 970d34288a61c25a7469de7b35725f199631823537e75994f3b4449167d047ac. Nov 5 15:03:41.966923 containerd[1976]: time="2025-11-05T15:03:41.966748508Z" level=info msg="StartContainer for \"970d34288a61c25a7469de7b35725f199631823537e75994f3b4449167d047ac\" returns successfully" Nov 5 15:03:42.984955 containerd[1976]: time="2025-11-05T15:03:42.984571221Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 15:03:42.989664 systemd[1]: cri-containerd-970d34288a61c25a7469de7b35725f199631823537e75994f3b4449167d047ac.scope: Deactivated successfully. Nov 5 15:03:42.990249 systemd[1]: cri-containerd-970d34288a61c25a7469de7b35725f199631823537e75994f3b4449167d047ac.scope: Consumed 963ms CPU time, 186.7M memory peak, 165.9M written to disk. 
Nov 5 15:03:42.995124 containerd[1976]: time="2025-11-05T15:03:42.995057470Z" level=info msg="received exit event container_id:\"970d34288a61c25a7469de7b35725f199631823537e75994f3b4449167d047ac\" id:\"970d34288a61c25a7469de7b35725f199631823537e75994f3b4449167d047ac\" pid:4180 exited_at:{seconds:1762355022 nanos:994293670}" Nov 5 15:03:42.995987 containerd[1976]: time="2025-11-05T15:03:42.995807506Z" level=info msg="TaskExit event in podsandbox handler container_id:\"970d34288a61c25a7469de7b35725f199631823537e75994f3b4449167d047ac\" id:\"970d34288a61c25a7469de7b35725f199631823537e75994f3b4449167d047ac\" pid:4180 exited_at:{seconds:1762355022 nanos:994293670}" Nov 5 15:03:43.040876 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-970d34288a61c25a7469de7b35725f199631823537e75994f3b4449167d047ac-rootfs.mount: Deactivated successfully. Nov 5 15:03:43.088798 kubelet[3405]: I1105 15:03:43.088633 3405 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 5 15:03:43.161107 systemd[1]: Created slice kubepods-burstable-pod1b5f9275_60ba_4e84_a340_fb8945d27281.slice - libcontainer container kubepods-burstable-pod1b5f9275_60ba_4e84_a340_fb8945d27281.slice. Nov 5 15:03:43.191262 systemd[1]: Created slice kubepods-burstable-pode0f7c878_15e6_4e8b_a9be_3e225dd7e262.slice - libcontainer container kubepods-burstable-pode0f7c878_15e6_4e8b_a9be_3e225dd7e262.slice. Nov 5 15:03:43.220434 kubelet[3405]: I1105 15:03:43.220299 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9w7t\" (UniqueName: \"kubernetes.io/projected/e0f7c878-15e6-4e8b-a9be-3e225dd7e262-kube-api-access-q9w7t\") pod \"coredns-668d6bf9bc-dcwpj\" (UID: \"e0f7c878-15e6-4e8b-a9be-3e225dd7e262\") " pod="kube-system/coredns-668d6bf9bc-dcwpj" Nov 5 15:03:43.228287 kubelet[3405]: I1105 15:03:43.228180 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10c8b7bb-d826-4668-8911-b97ba8246d4b-tigera-ca-bundle\") pod \"calico-kube-controllers-784689597c-cgw5w\" (UID: \"10c8b7bb-d826-4668-8911-b97ba8246d4b\") " pod="calico-system/calico-kube-controllers-784689597c-cgw5w" Nov 5 15:03:43.230293 kubelet[3405]: I1105 15:03:43.230001 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqssh\" (UniqueName: \"kubernetes.io/projected/10c8b7bb-d826-4668-8911-b97ba8246d4b-kube-api-access-sqssh\") pod \"calico-kube-controllers-784689597c-cgw5w\" (UID: \"10c8b7bb-d826-4668-8911-b97ba8246d4b\") " pod="calico-system/calico-kube-controllers-784689597c-cgw5w" Nov 5 15:03:43.233489 kubelet[3405]: I1105 15:03:43.233426 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1b5f9275-60ba-4e84-a340-fb8945d27281-config-volume\") pod \"coredns-668d6bf9bc-fpblv\" (UID: \"1b5f9275-60ba-4e84-a340-fb8945d27281\") " pod="kube-system/coredns-668d6bf9bc-fpblv" Nov 5 15:03:43.233631 kubelet[3405]: I1105 15:03:43.233562 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e0f7c878-15e6-4e8b-a9be-3e225dd7e262-config-volume\") pod \"coredns-668d6bf9bc-dcwpj\" (UID: \"e0f7c878-15e6-4e8b-a9be-3e225dd7e262\") " pod="kube-system/coredns-668d6bf9bc-dcwpj" Nov 5 15:03:43.233631 kubelet[3405]: I1105 15:03:43.233605 
3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sxtsp\" (UniqueName: \"kubernetes.io/projected/1b5f9275-60ba-4e84-a340-fb8945d27281-kube-api-access-sxtsp\") pod \"coredns-668d6bf9bc-fpblv\" (UID: \"1b5f9275-60ba-4e84-a340-fb8945d27281\") " pod="kube-system/coredns-668d6bf9bc-fpblv" Nov 5 15:03:43.246736 systemd[1]: Created slice kubepods-besteffort-pod10c8b7bb_d826_4668_8911_b97ba8246d4b.slice - libcontainer container kubepods-besteffort-pod10c8b7bb_d826_4668_8911_b97ba8246d4b.slice. Nov 5 15:03:43.271429 systemd[1]: Created slice kubepods-besteffort-poda23cc4fd_4004_43fd_a3bd_e3b7c8798f11.slice - libcontainer container kubepods-besteffort-poda23cc4fd_4004_43fd_a3bd_e3b7c8798f11.slice. Nov 5 15:03:43.289086 systemd[1]: Created slice kubepods-besteffort-pod8ec6fd17_e646_449e_8324_b2210e743bb4.slice - libcontainer container kubepods-besteffort-pod8ec6fd17_e646_449e_8324_b2210e743bb4.slice. Nov 5 15:03:43.316300 systemd[1]: Created slice kubepods-besteffort-pod7ef28263_0ce9_4955_869b_6ae38808f23b.slice - libcontainer container kubepods-besteffort-pod7ef28263_0ce9_4955_869b_6ae38808f23b.slice. Nov 5 15:03:43.332948 systemd[1]: Created slice kubepods-besteffort-pod8d9369e7_33c5_42a2_b295_6e6f5445630e.slice - libcontainer container kubepods-besteffort-pod8d9369e7_33c5_42a2_b295_6e6f5445630e.slice. Nov 5 15:03:43.335852 kubelet[3405]: I1105 15:03:43.335758 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8ec6fd17-e646-449e-8324-b2210e743bb4-calico-apiserver-certs\") pod \"calico-apiserver-7bb8dc7b97-bnfj9\" (UID: \"8ec6fd17-e646-449e-8324-b2210e743bb4\") " pod="calico-apiserver/calico-apiserver-7bb8dc7b97-bnfj9" Nov 5 15:03:43.336697 kubelet[3405]: I1105 15:03:43.336351 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7ef28263-0ce9-4955-869b-6ae38808f23b-goldmane-key-pair\") pod \"goldmane-666569f655-jcc47\" (UID: \"7ef28263-0ce9-4955-869b-6ae38808f23b\") " pod="calico-system/goldmane-666569f655-jcc47" Nov 5 15:03:43.336861 kubelet[3405]: I1105 15:03:43.336674 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8d9369e7-33c5-42a2-b295-6e6f5445630e-calico-apiserver-certs\") pod \"calico-apiserver-7bb8dc7b97-7jq9b\" (UID: \"8d9369e7-33c5-42a2-b295-6e6f5445630e\") " pod="calico-apiserver/calico-apiserver-7bb8dc7b97-7jq9b" Nov 5 15:03:43.337059 kubelet[3405]: I1105 15:03:43.336970 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvjfq\" (UniqueName: \"kubernetes.io/projected/7ef28263-0ce9-4955-869b-6ae38808f23b-kube-api-access-jvjfq\") pod \"goldmane-666569f655-jcc47\" (UID: \"7ef28263-0ce9-4955-869b-6ae38808f23b\") " pod="calico-system/goldmane-666569f655-jcc47" Nov 5 15:03:43.337247 kubelet[3405]: I1105 15:03:43.337220 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llq2s\" (UniqueName: \"kubernetes.io/projected/8d9369e7-33c5-42a2-b295-6e6f5445630e-kube-api-access-llq2s\") pod \"calico-apiserver-7bb8dc7b97-7jq9b\" (UID: \"8d9369e7-33c5-42a2-b295-6e6f5445630e\") " pod="calico-apiserver/calico-apiserver-7bb8dc7b97-7jq9b" Nov 5 15:03:43.337999 kubelet[3405]: I1105 
15:03:43.337949 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a23cc4fd-4004-43fd-a3bd-e3b7c8798f11-whisker-ca-bundle\") pod \"whisker-978f779c4-89j44\" (UID: \"a23cc4fd-4004-43fd-a3bd-e3b7c8798f11\") " pod="calico-system/whisker-978f779c4-89j44" Nov 5 15:03:43.338102 kubelet[3405]: I1105 15:03:43.338016 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwxrv\" (UniqueName: \"kubernetes.io/projected/a23cc4fd-4004-43fd-a3bd-e3b7c8798f11-kube-api-access-pwxrv\") pod \"whisker-978f779c4-89j44\" (UID: \"a23cc4fd-4004-43fd-a3bd-e3b7c8798f11\") " pod="calico-system/whisker-978f779c4-89j44" Nov 5 15:03:43.338102 kubelet[3405]: I1105 15:03:43.338095 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a23cc4fd-4004-43fd-a3bd-e3b7c8798f11-whisker-backend-key-pair\") pod \"whisker-978f779c4-89j44\" (UID: \"a23cc4fd-4004-43fd-a3bd-e3b7c8798f11\") " pod="calico-system/whisker-978f779c4-89j44" Nov 5 15:03:43.342232 kubelet[3405]: I1105 15:03:43.338141 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jg9b\" (UniqueName: \"kubernetes.io/projected/8ec6fd17-e646-449e-8324-b2210e743bb4-kube-api-access-4jg9b\") pod \"calico-apiserver-7bb8dc7b97-bnfj9\" (UID: \"8ec6fd17-e646-449e-8324-b2210e743bb4\") " pod="calico-apiserver/calico-apiserver-7bb8dc7b97-bnfj9" Nov 5 15:03:43.342399 kubelet[3405]: I1105 15:03:43.342283 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ef28263-0ce9-4955-869b-6ae38808f23b-config\") pod \"goldmane-666569f655-jcc47\" (UID: \"7ef28263-0ce9-4955-869b-6ae38808f23b\") " pod="calico-system/goldmane-666569f655-jcc47" Nov 5 15:03:43.342399 kubelet[3405]: I1105 15:03:43.342341 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ef28263-0ce9-4955-869b-6ae38808f23b-goldmane-ca-bundle\") pod \"goldmane-666569f655-jcc47\" (UID: \"7ef28263-0ce9-4955-869b-6ae38808f23b\") " pod="calico-system/goldmane-666569f655-jcc47" Nov 5 15:03:43.487175 containerd[1976]: time="2025-11-05T15:03:43.483445856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fpblv,Uid:1b5f9275-60ba-4e84-a340-fb8945d27281,Namespace:kube-system,Attempt:0,}" Nov 5 15:03:43.531315 containerd[1976]: time="2025-11-05T15:03:43.531132356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dcwpj,Uid:e0f7c878-15e6-4e8b-a9be-3e225dd7e262,Namespace:kube-system,Attempt:0,}" Nov 5 15:03:43.575877 containerd[1976]: time="2025-11-05T15:03:43.575681048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-784689597c-cgw5w,Uid:10c8b7bb-d826-4668-8911-b97ba8246d4b,Namespace:calico-system,Attempt:0,}" Nov 5 15:03:43.583959 containerd[1976]: time="2025-11-05T15:03:43.583848812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-978f779c4-89j44,Uid:a23cc4fd-4004-43fd-a3bd-e3b7c8798f11,Namespace:calico-system,Attempt:0,}" Nov 5 15:03:43.613397 containerd[1976]: time="2025-11-05T15:03:43.611655345Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-7bb8dc7b97-bnfj9,Uid:8ec6fd17-e646-449e-8324-b2210e743bb4,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:03:43.638865 containerd[1976]: time="2025-11-05T15:03:43.638387373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jcc47,Uid:7ef28263-0ce9-4955-869b-6ae38808f23b,Namespace:calico-system,Attempt:0,}" Nov 5 15:03:43.657640 containerd[1976]: time="2025-11-05T15:03:43.657570249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb8dc7b97-7jq9b,Uid:8d9369e7-33c5-42a2-b295-6e6f5445630e,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:03:43.818490 systemd[1]: Created slice kubepods-besteffort-podef9a0063_5427_4eaf_b6d6_01cd9334db4b.slice - libcontainer container kubepods-besteffort-podef9a0063_5427_4eaf_b6d6_01cd9334db4b.slice. Nov 5 15:03:43.830616 containerd[1976]: time="2025-11-05T15:03:43.830545114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qgwk8,Uid:ef9a0063-5427-4eaf-b6d6-01cd9334db4b,Namespace:calico-system,Attempt:0,}" Nov 5 15:03:44.060568 containerd[1976]: time="2025-11-05T15:03:44.059494615Z" level=error msg="Failed to destroy network for sandbox \"2247c048b632623554bb3e5b6775b42923388bf25f767a3a66966cd4ab22ccc2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:03:44.072483 containerd[1976]: time="2025-11-05T15:03:44.072333559Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-978f779c4-89j44,Uid:a23cc4fd-4004-43fd-a3bd-e3b7c8798f11,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2247c048b632623554bb3e5b6775b42923388bf25f767a3a66966cd4ab22ccc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:03:44.074112 kubelet[3405]: E1105 15:03:44.073206 3405 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2247c048b632623554bb3e5b6775b42923388bf25f767a3a66966cd4ab22ccc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:03:44.074112 kubelet[3405]: E1105 15:03:44.073305 3405 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2247c048b632623554bb3e5b6775b42923388bf25f767a3a66966cd4ab22ccc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-978f779c4-89j44" Nov 5 15:03:44.074112 kubelet[3405]: E1105 15:03:44.073339 3405 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2247c048b632623554bb3e5b6775b42923388bf25f767a3a66966cd4ab22ccc2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-978f779c4-89j44" Nov 5 15:03:44.074403 kubelet[3405]: E1105 15:03:44.073405 3405 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-978f779c4-89j44_calico-system(a23cc4fd-4004-43fd-a3bd-e3b7c8798f11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-978f779c4-89j44_calico-system(a23cc4fd-4004-43fd-a3bd-e3b7c8798f11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2247c048b632623554bb3e5b6775b42923388bf25f767a3a66966cd4ab22ccc2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-978f779c4-89j44" podUID="a23cc4fd-4004-43fd-a3bd-e3b7c8798f11" Nov 5 15:03:44.113567 systemd[1]: run-netns-cni\x2dbbf294a1\x2d9f97\x2dee24\x2da7ee\x2d7ebfca06cd68.mount: Deactivated successfully. Nov 5 15:03:44.184925 containerd[1976]: time="2025-11-05T15:03:44.183294871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 5 15:03:44.194273 containerd[1976]: time="2025-11-05T15:03:44.193409695Z" level=error msg="Failed to destroy network for sandbox \"10917c506fe82c715a20ae663da5ec54399fa79dc3a86050d344d8e94037790d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:03:44.198450 containerd[1976]: time="2025-11-05T15:03:44.198380719Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dcwpj,Uid:e0f7c878-15e6-4e8b-a9be-3e225dd7e262,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"10917c506fe82c715a20ae663da5ec54399fa79dc3a86050d344d8e94037790d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:03:44.201177 kubelet[3405]: E1105 15:03:44.200616 3405 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10917c506fe82c715a20ae663da5ec54399fa79dc3a86050d344d8e94037790d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:03:44.202818 kubelet[3405]: E1105 15:03:44.202717 3405 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10917c506fe82c715a20ae663da5ec54399fa79dc3a86050d344d8e94037790d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dcwpj" Nov 5 15:03:44.203055 systemd[1]: run-netns-cni\x2d72e5ccd5\x2dd569\x2debaa\x2dfe39\x2dda2351a009ad.mount: Deactivated successfully. 
Nov 5 15:03:44.203829 kubelet[3405]: E1105 15:03:44.203116 3405 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10917c506fe82c715a20ae663da5ec54399fa79dc3a86050d344d8e94037790d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dcwpj" Nov 5 15:03:44.204910 kubelet[3405]: E1105 15:03:44.204300 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dcwpj_kube-system(e0f7c878-15e6-4e8b-a9be-3e225dd7e262)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dcwpj_kube-system(e0f7c878-15e6-4e8b-a9be-3e225dd7e262)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10917c506fe82c715a20ae663da5ec54399fa79dc3a86050d344d8e94037790d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dcwpj" podUID="e0f7c878-15e6-4e8b-a9be-3e225dd7e262" Nov 5 15:03:44.259121 containerd[1976]: time="2025-11-05T15:03:44.258656072Z" level=error msg="Failed to destroy network for sandbox \"e05acc593ec090e9a685c659aae323aa12f63702b4bdf370628bb10edc6a14cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:03:44.269951 systemd[1]: run-netns-cni\x2da71896c0\x2dddba\x2df951\x2d9bb0\x2df5940b342217.mount: Deactivated successfully. 
Nov 5 15:03:44.279811 containerd[1976]: time="2025-11-05T15:03:44.279240116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fpblv,Uid:1b5f9275-60ba-4e84-a340-fb8945d27281,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e05acc593ec090e9a685c659aae323aa12f63702b4bdf370628bb10edc6a14cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:03:44.282759 kubelet[3405]: E1105 15:03:44.282657 3405 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e05acc593ec090e9a685c659aae323aa12f63702b4bdf370628bb10edc6a14cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:03:44.283233 kubelet[3405]: E1105 15:03:44.282772 3405 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e05acc593ec090e9a685c659aae323aa12f63702b4bdf370628bb10edc6a14cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fpblv" Nov 5 15:03:44.283347 kubelet[3405]: E1105 15:03:44.283240 3405 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e05acc593ec090e9a685c659aae323aa12f63702b4bdf370628bb10edc6a14cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-fpblv" Nov 5 15:03:44.283504 kubelet[3405]: E1105 15:03:44.283421 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-fpblv_kube-system(1b5f9275-60ba-4e84-a340-fb8945d27281)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-fpblv_kube-system(1b5f9275-60ba-4e84-a340-fb8945d27281)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e05acc593ec090e9a685c659aae323aa12f63702b4bdf370628bb10edc6a14cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-fpblv" podUID="1b5f9275-60ba-4e84-a340-fb8945d27281" Nov 5 15:03:44.288136 containerd[1976]: time="2025-11-05T15:03:44.287495576Z" level=error msg="Failed to destroy network for sandbox \"d40a1764080465a3d7349f0188a40b374ec15b6521b7af1f70be881054519229\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:03:44.292593 containerd[1976]: time="2025-11-05T15:03:44.292533056Z" level=error msg="Failed to destroy network for sandbox \"f7f92aa3b7b3f7e41b79a33289a16fa727291c879a6b5d9d548935cd84b5b80d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 5 15:03:44.295196 containerd[1976]: time="2025-11-05T15:03:44.294039824Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-784689597c-cgw5w,Uid:10c8b7bb-d826-4668-8911-b97ba8246d4b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d40a1764080465a3d7349f0188a40b374ec15b6521b7af1f70be881054519229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:03:44.297306 kubelet[3405]: E1105 15:03:44.297179 3405 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d40a1764080465a3d7349f0188a40b374ec15b6521b7af1f70be881054519229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:03:44.298887 kubelet[3405]: E1105 15:03:44.297549 3405 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d40a1764080465a3d7349f0188a40b374ec15b6521b7af1f70be881054519229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-784689597c-cgw5w" Nov 5 15:03:44.298887 kubelet[3405]: E1105 15:03:44.297595 3405 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d40a1764080465a3d7349f0188a40b374ec15b6521b7af1f70be881054519229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-784689597c-cgw5w" Nov 5 15:03:44.298887 kubelet[3405]: E1105 15:03:44.298751 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-784689597c-cgw5w_calico-system(10c8b7bb-d826-4668-8911-b97ba8246d4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-784689597c-cgw5w_calico-system(10c8b7bb-d826-4668-8911-b97ba8246d4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d40a1764080465a3d7349f0188a40b374ec15b6521b7af1f70be881054519229\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-784689597c-cgw5w" podUID="10c8b7bb-d826-4668-8911-b97ba8246d4b" Nov 5 15:03:44.300090 systemd[1]: run-netns-cni\x2da4c47adb\x2d9afd\x2d5337\x2d953b\x2d7a8770361aef.mount: Deactivated successfully. 
Nov 5 15:03:44.306759 containerd[1976]: time="2025-11-05T15:03:44.306517436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb8dc7b97-7jq9b,Uid:8d9369e7-33c5-42a2-b295-6e6f5445630e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7f92aa3b7b3f7e41b79a33289a16fa727291c879a6b5d9d548935cd84b5b80d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:03:44.313443 kubelet[3405]: E1105 15:03:44.313385 3405 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7f92aa3b7b3f7e41b79a33289a16fa727291c879a6b5d9d548935cd84b5b80d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:03:44.313689 kubelet[3405]: E1105 15:03:44.313657 3405 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7f92aa3b7b3f7e41b79a33289a16fa727291c879a6b5d9d548935cd84b5b80d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-7jq9b" Nov 5 15:03:44.313818 kubelet[3405]: E1105 15:03:44.313785 3405 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7f92aa3b7b3f7e41b79a33289a16fa727291c879a6b5d9d548935cd84b5b80d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-7jq9b" Nov 5 15:03:44.313993 kubelet[3405]: E1105 15:03:44.313954 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bb8dc7b97-7jq9b_calico-apiserver(8d9369e7-33c5-42a2-b295-6e6f5445630e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bb8dc7b97-7jq9b_calico-apiserver(8d9369e7-33c5-42a2-b295-6e6f5445630e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7f92aa3b7b3f7e41b79a33289a16fa727291c879a6b5d9d548935cd84b5b80d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-7jq9b" podUID="8d9369e7-33c5-42a2-b295-6e6f5445630e" Nov 5 15:03:44.316272 containerd[1976]: time="2025-11-05T15:03:44.314797304Z" level=error msg="Failed to destroy network for sandbox \"0e47db7f11586eb627a5b4bd41ebbfbf336182ef8a840e28a5e3e3bf129574a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:03:44.319625 containerd[1976]: time="2025-11-05T15:03:44.318530036Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb8dc7b97-bnfj9,Uid:8ec6fd17-e646-449e-8324-b2210e743bb4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"0e47db7f11586eb627a5b4bd41ebbfbf336182ef8a840e28a5e3e3bf129574a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:03:44.321077 containerd[1976]: time="2025-11-05T15:03:44.320905340Z" level=error msg="Failed to destroy network for sandbox \"8567742ad7e60c4323ddc26ff48c88034ccbc07801d8ffa16d67a8448ab64762\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:03:44.322185 kubelet[3405]: E1105 15:03:44.321917 3405 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e47db7f11586eb627a5b4bd41ebbfbf336182ef8a840e28a5e3e3bf129574a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:03:44.326296 kubelet[3405]: E1105 15:03:44.325260 3405 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e47db7f11586eb627a5b4bd41ebbfbf336182ef8a840e28a5e3e3bf129574a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-bnfj9" Nov 5 15:03:44.326296 kubelet[3405]: E1105 15:03:44.325447 3405 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e47db7f11586eb627a5b4bd41ebbfbf336182ef8a840e28a5e3e3bf129574a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-bnfj9" Nov 5 15:03:44.326296 kubelet[3405]: E1105 15:03:44.325511 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7bb8dc7b97-bnfj9_calico-apiserver(8ec6fd17-e646-449e-8324-b2210e743bb4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7bb8dc7b97-bnfj9_calico-apiserver(8ec6fd17-e646-449e-8324-b2210e743bb4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e47db7f11586eb627a5b4bd41ebbfbf336182ef8a840e28a5e3e3bf129574a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-bnfj9" podUID="8ec6fd17-e646-449e-8324-b2210e743bb4" Nov 5 15:03:44.329860 containerd[1976]: time="2025-11-05T15:03:44.329278040Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jcc47,Uid:7ef28263-0ce9-4955-869b-6ae38808f23b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8567742ad7e60c4323ddc26ff48c88034ccbc07801d8ffa16d67a8448ab64762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Nov 5 15:03:44.331305 kubelet[3405]: E1105 15:03:44.329577 3405 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8567742ad7e60c4323ddc26ff48c88034ccbc07801d8ffa16d67a8448ab64762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:03:44.331305 kubelet[3405]: E1105 15:03:44.329646 3405 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8567742ad7e60c4323ddc26ff48c88034ccbc07801d8ffa16d67a8448ab64762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-jcc47" Nov 5 15:03:44.331305 kubelet[3405]: E1105 15:03:44.329716 3405 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8567742ad7e60c4323ddc26ff48c88034ccbc07801d8ffa16d67a8448ab64762\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-jcc47" Nov 5 15:03:44.331505 kubelet[3405]: E1105 15:03:44.329778 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-jcc47_calico-system(7ef28263-0ce9-4955-869b-6ae38808f23b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-jcc47_calico-system(7ef28263-0ce9-4955-869b-6ae38808f23b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8567742ad7e60c4323ddc26ff48c88034ccbc07801d8ffa16d67a8448ab64762\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-jcc47" podUID="7ef28263-0ce9-4955-869b-6ae38808f23b" Nov 5 15:03:44.382194 containerd[1976]: time="2025-11-05T15:03:44.381681380Z" level=error msg="Failed to destroy network for sandbox \"60666b9a9580e39d14685e316012184a11e39173be7d2e68c37824a0e301fae4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:03:44.384608 containerd[1976]: time="2025-11-05T15:03:44.384297848Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qgwk8,Uid:ef9a0063-5427-4eaf-b6d6-01cd9334db4b,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"60666b9a9580e39d14685e316012184a11e39173be7d2e68c37824a0e301fae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:03:44.384777 kubelet[3405]: E1105 15:03:44.384643 3405 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60666b9a9580e39d14685e316012184a11e39173be7d2e68c37824a0e301fae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:03:44.384777 kubelet[3405]: E1105 15:03:44.384717 3405 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60666b9a9580e39d14685e316012184a11e39173be7d2e68c37824a0e301fae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qgwk8" Nov 5 15:03:44.384777 kubelet[3405]: E1105 15:03:44.384757 3405 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60666b9a9580e39d14685e316012184a11e39173be7d2e68c37824a0e301fae4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-qgwk8" Nov 5 15:03:44.384981 kubelet[3405]: E1105 15:03:44.384819 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-qgwk8_calico-system(ef9a0063-5427-4eaf-b6d6-01cd9334db4b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-qgwk8_calico-system(ef9a0063-5427-4eaf-b6d6-01cd9334db4b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60666b9a9580e39d14685e316012184a11e39173be7d2e68c37824a0e301fae4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-qgwk8" podUID="ef9a0063-5427-4eaf-b6d6-01cd9334db4b" Nov 5 15:03:45.039261 systemd[1]: run-netns-cni\x2d82cf75ad\x2d188d\x2d9a63\x2d4ccd\x2dcbf835a3943e.mount: Deactivated successfully. Nov 5 15:03:45.039481 systemd[1]: run-netns-cni\x2d66922563\x2dbee7\x2de492\x2d28ad\x2d5408f1d8a78a.mount: Deactivated successfully. Nov 5 15:03:45.039607 systemd[1]: run-netns-cni\x2d10f152ac\x2d717a\x2d91c8\x2daf33\x2dfc2ea43ee64f.mount: Deactivated successfully. Nov 5 15:03:45.039723 systemd[1]: run-netns-cni\x2ddd4c4a37\x2def25\x2d9ccd\x2dc0c1\x2db3f1520698b5.mount: Deactivated successfully. Nov 5 15:03:52.657931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount808798150.mount: Deactivated successfully. 
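Every RunPodSandbox failure in the block above, for coredns, the two calico-apiserver pods, calico-kube-controllers, goldmane, whisker and the csi-node-driver alike, has the same root cause: the Calico CNI plugin needs /var/lib/calico/nodename, a file the calico/node container writes once it is running, and calico-node is still being pulled until roughly 15:03:52. A minimal sketch of that precondition (not the plugin's actual code):

    package main

    import (
        "errors"
        "fmt"
        "io/fs"
        "os"
        "strings"
    )

    // calicoNodeName mimics the precondition the log keeps tripping over:
    // the CNI plugin reads the node name that calico/node records on startup.
    func calicoNodeName() (string, error) {
        b, err := os.ReadFile("/var/lib/calico/nodename")
        if errors.Is(err, fs.ErrNotExist) {
            // The state throughout 15:03:44: the file is not there yet, so
            // every CNI add/delete is rejected and kubelet retries the pod.
            return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
        }
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(b)), nil
    }

    func main() {
        name, err := calicoNodeName()
        if err != nil {
            fmt.Println("sandbox setup would fail:", err)
            return
        }
        fmt.Println("node name:", name) // e.g. ip-172-31-21-83 once calico-node is up
    }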
Nov 5 15:03:52.732906 containerd[1976]: time="2025-11-05T15:03:52.732782862Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:52.734941 containerd[1976]: time="2025-11-05T15:03:52.734779182Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 5 15:03:52.737222 containerd[1976]: time="2025-11-05T15:03:52.736976490Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:52.743567 containerd[1976]: time="2025-11-05T15:03:52.743464770Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:52.745183 containerd[1976]: time="2025-11-05T15:03:52.744779682Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 8.561400823s" Nov 5 15:03:52.745183 containerd[1976]: time="2025-11-05T15:03:52.744844026Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 5 15:03:52.777224 containerd[1976]: time="2025-11-05T15:03:52.776725218Z" level=info msg="CreateContainer within sandbox \"148130628b32983aca0ecdbb9b22f81fa24401dd472bb594ca1dabfb10fe54e6\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 15:03:52.839177 containerd[1976]: time="2025-11-05T15:03:52.836491818Z" level=info msg="Container 5e837d75b3d7a845060423813e7bed3562e3fc97b16cf65934ccf4f4470e35cb: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:03:52.846986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2918169478.mount: Deactivated successfully. Nov 5 15:03:52.862408 containerd[1976]: time="2025-11-05T15:03:52.862346899Z" level=info msg="CreateContainer within sandbox \"148130628b32983aca0ecdbb9b22f81fa24401dd472bb594ca1dabfb10fe54e6\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5e837d75b3d7a845060423813e7bed3562e3fc97b16cf65934ccf4f4470e35cb\"" Nov 5 15:03:52.864313 containerd[1976]: time="2025-11-05T15:03:52.863574283Z" level=info msg="StartContainer for \"5e837d75b3d7a845060423813e7bed3562e3fc97b16cf65934ccf4f4470e35cb\"" Nov 5 15:03:52.867315 containerd[1976]: time="2025-11-05T15:03:52.867261607Z" level=info msg="connecting to shim 5e837d75b3d7a845060423813e7bed3562e3fc97b16cf65934ccf4f4470e35cb" address="unix:///run/containerd/s/6876a64f5cef3e9e5d924963852bfd8e91d5a8210fa92dd8b2e21eee7cfa9dd5" protocol=ttrpc version=3 Nov 5 15:03:52.952979 systemd[1]: Started cri-containerd-5e837d75b3d7a845060423813e7bed3562e3fc97b16cf65934ccf4f4470e35cb.scope - libcontainer container 5e837d75b3d7a845060423813e7bed3562e3fc97b16cf65934ccf4f4470e35cb. 
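By this point the same containerd[1976] daemon has pulled pod2daemon-flexvol, cni and node from ghcr.io/flatcar/calico, all into the CRI's k8s.io namespace. A small out-of-band check that lists those images, assuming the classic github.com/containerd/containerd Go client module (the import paths differ under containerd 2.x); this is a sketch, not anything run on this system:

    package main

    import (
        "context"
        "fmt"
        "log"
        "strings"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // CRI-managed images live under containerd's "k8s.io" namespace.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        images, err := client.ListImages(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, img := range images {
            if strings.HasPrefix(img.Name(), "ghcr.io/flatcar/calico/") {
                fmt.Println(img.Name())
            }
        }
    }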
Nov 5 15:03:53.056514 containerd[1976]: time="2025-11-05T15:03:53.056428491Z" level=info msg="StartContainer for \"5e837d75b3d7a845060423813e7bed3562e3fc97b16cf65934ccf4f4470e35cb\" returns successfully" Nov 5 15:03:53.273556 kubelet[3405]: I1105 15:03:53.273075 3405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ncrjk" podStartSLOduration=2.180627763 podStartE2EDuration="24.272973665s" podCreationTimestamp="2025-11-05 15:03:29 +0000 UTC" firstStartedPulling="2025-11-05 15:03:30.654279236 +0000 UTC m=+37.198220874" lastFinishedPulling="2025-11-05 15:03:52.74662515 +0000 UTC m=+59.290566776" observedRunningTime="2025-11-05 15:03:53.268934621 +0000 UTC m=+59.812876259" watchObservedRunningTime="2025-11-05 15:03:53.272973665 +0000 UTC m=+59.816915291" Nov 5 15:03:53.479269 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 15:03:53.479422 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 5 15:03:53.492338 containerd[1976]: time="2025-11-05T15:03:53.492122802Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5e837d75b3d7a845060423813e7bed3562e3fc97b16cf65934ccf4f4470e35cb\" id:\"6fde662f982d858a04f37819434c2ebd74f046547d8a0e785a1594acfdf8511c\" pid:4483 exit_status:1 exited_at:{seconds:1762355033 nanos:491247618}" Nov 5 15:03:53.825482 kubelet[3405]: I1105 15:03:53.825422 3405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a23cc4fd-4004-43fd-a3bd-e3b7c8798f11-whisker-ca-bundle\") pod \"a23cc4fd-4004-43fd-a3bd-e3b7c8798f11\" (UID: \"a23cc4fd-4004-43fd-a3bd-e3b7c8798f11\") " Nov 5 15:03:53.825684 kubelet[3405]: I1105 15:03:53.825551 3405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwxrv\" (UniqueName: \"kubernetes.io/projected/a23cc4fd-4004-43fd-a3bd-e3b7c8798f11-kube-api-access-pwxrv\") pod \"a23cc4fd-4004-43fd-a3bd-e3b7c8798f11\" (UID: \"a23cc4fd-4004-43fd-a3bd-e3b7c8798f11\") " Nov 5 15:03:53.825684 kubelet[3405]: I1105 15:03:53.825594 3405 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a23cc4fd-4004-43fd-a3bd-e3b7c8798f11-whisker-backend-key-pair\") pod \"a23cc4fd-4004-43fd-a3bd-e3b7c8798f11\" (UID: \"a23cc4fd-4004-43fd-a3bd-e3b7c8798f11\") " Nov 5 15:03:53.830188 kubelet[3405]: I1105 15:03:53.829543 3405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a23cc4fd-4004-43fd-a3bd-e3b7c8798f11-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "a23cc4fd-4004-43fd-a3bd-e3b7c8798f11" (UID: "a23cc4fd-4004-43fd-a3bd-e3b7c8798f11"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 15:03:53.844326 systemd[1]: var-lib-kubelet-pods-a23cc4fd\x2d4004\x2d43fd\x2da3bd\x2de3b7c8798f11-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 5 15:03:53.846405 kubelet[3405]: I1105 15:03:53.845781 3405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a23cc4fd-4004-43fd-a3bd-e3b7c8798f11-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "a23cc4fd-4004-43fd-a3bd-e3b7c8798f11" (UID: "a23cc4fd-4004-43fd-a3bd-e3b7c8798f11"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 15:03:53.852167 kubelet[3405]: I1105 15:03:53.851272 3405 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a23cc4fd-4004-43fd-a3bd-e3b7c8798f11-kube-api-access-pwxrv" (OuterVolumeSpecName: "kube-api-access-pwxrv") pod "a23cc4fd-4004-43fd-a3bd-e3b7c8798f11" (UID: "a23cc4fd-4004-43fd-a3bd-e3b7c8798f11"). InnerVolumeSpecName "kube-api-access-pwxrv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 15:03:53.853512 systemd[1]: var-lib-kubelet-pods-a23cc4fd\x2d4004\x2d43fd\x2da3bd\x2de3b7c8798f11-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpwxrv.mount: Deactivated successfully. Nov 5 15:03:53.926468 kubelet[3405]: I1105 15:03:53.926406 3405 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pwxrv\" (UniqueName: \"kubernetes.io/projected/a23cc4fd-4004-43fd-a3bd-e3b7c8798f11-kube-api-access-pwxrv\") on node \"ip-172-31-21-83\" DevicePath \"\"" Nov 5 15:03:53.926468 kubelet[3405]: I1105 15:03:53.926463 3405 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/a23cc4fd-4004-43fd-a3bd-e3b7c8798f11-whisker-backend-key-pair\") on node \"ip-172-31-21-83\" DevicePath \"\"" Nov 5 15:03:53.926676 kubelet[3405]: I1105 15:03:53.926491 3405 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a23cc4fd-4004-43fd-a3bd-e3b7c8798f11-whisker-ca-bundle\") on node \"ip-172-31-21-83\" DevicePath \"\"" Nov 5 15:03:54.251214 systemd[1]: Removed slice kubepods-besteffort-poda23cc4fd_4004_43fd_a3bd_e3b7c8798f11.slice - libcontainer container kubepods-besteffort-poda23cc4fd_4004_43fd_a3bd_e3b7c8798f11.slice. Nov 5 15:03:54.371055 systemd[1]: Created slice kubepods-besteffort-podc36dc63e_c060_4c77_b41d_1b4d1b676e6a.slice - libcontainer container kubepods-besteffort-podc36dc63e_c060_4c77_b41d_1b4d1b676e6a.slice. 
Nov 5 15:03:54.432766 kubelet[3405]: I1105 15:03:54.432322 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c36dc63e-c060-4c77-b41d-1b4d1b676e6a-whisker-backend-key-pair\") pod \"whisker-5895d64bd-fm899\" (UID: \"c36dc63e-c060-4c77-b41d-1b4d1b676e6a\") " pod="calico-system/whisker-5895d64bd-fm899" Nov 5 15:03:54.436213 kubelet[3405]: I1105 15:03:54.435340 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c36dc63e-c060-4c77-b41d-1b4d1b676e6a-whisker-ca-bundle\") pod \"whisker-5895d64bd-fm899\" (UID: \"c36dc63e-c060-4c77-b41d-1b4d1b676e6a\") " pod="calico-system/whisker-5895d64bd-fm899" Nov 5 15:03:54.436213 kubelet[3405]: I1105 15:03:54.435423 3405 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fw6st\" (UniqueName: \"kubernetes.io/projected/c36dc63e-c060-4c77-b41d-1b4d1b676e6a-kube-api-access-fw6st\") pod \"whisker-5895d64bd-fm899\" (UID: \"c36dc63e-c060-4c77-b41d-1b4d1b676e6a\") " pod="calico-system/whisker-5895d64bd-fm899" Nov 5 15:03:54.560945 containerd[1976]: time="2025-11-05T15:03:54.560469751Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5e837d75b3d7a845060423813e7bed3562e3fc97b16cf65934ccf4f4470e35cb\" id:\"26702dc6254c8d1303fc450ed040ae4ad69cbde088afe7120143a1bb2bce8ad8\" pid:4539 exit_status:1 exited_at:{seconds:1762355034 nanos:559646683}" Nov 5 15:03:54.681109 containerd[1976]: time="2025-11-05T15:03:54.680982056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5895d64bd-fm899,Uid:c36dc63e-c060-4c77-b41d-1b4d1b676e6a,Namespace:calico-system,Attempt:0,}" Nov 5 15:03:55.003461 (udev-worker)[4497]: Network interface NamePolicy= disabled on kernel command line. 
Nov 5 15:03:55.006652 systemd-networkd[1583]: cali31930132c33: Link UP Nov 5 15:03:55.010129 systemd-networkd[1583]: cali31930132c33: Gained carrier Nov 5 15:03:55.040944 containerd[1976]: 2025-11-05 15:03:54.737 [INFO][4553] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 15:03:55.040944 containerd[1976]: 2025-11-05 15:03:54.821 [INFO][4553] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--83-k8s-whisker--5895d64bd--fm899-eth0 whisker-5895d64bd- calico-system c36dc63e-c060-4c77-b41d-1b4d1b676e6a 952 0 2025-11-05 15:03:54 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5895d64bd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-21-83 whisker-5895d64bd-fm899 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali31930132c33 [] [] }} ContainerID="df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c" Namespace="calico-system" Pod="whisker-5895d64bd-fm899" WorkloadEndpoint="ip--172--31--21--83-k8s-whisker--5895d64bd--fm899-" Nov 5 15:03:55.040944 containerd[1976]: 2025-11-05 15:03:54.823 [INFO][4553] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c" Namespace="calico-system" Pod="whisker-5895d64bd-fm899" WorkloadEndpoint="ip--172--31--21--83-k8s-whisker--5895d64bd--fm899-eth0" Nov 5 15:03:55.040944 containerd[1976]: 2025-11-05 15:03:54.920 [INFO][4564] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c" HandleID="k8s-pod-network.df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c" Workload="ip--172--31--21--83-k8s-whisker--5895d64bd--fm899-eth0" Nov 5 15:03:55.041363 containerd[1976]: 2025-11-05 15:03:54.920 [INFO][4564] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c" HandleID="k8s-pod-network.df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c" Workload="ip--172--31--21--83-k8s-whisker--5895d64bd--fm899-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000351b50), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-83", "pod":"whisker-5895d64bd-fm899", "timestamp":"2025-11-05 15:03:54.920572185 +0000 UTC"}, Hostname:"ip-172-31-21-83", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:03:55.041363 containerd[1976]: 2025-11-05 15:03:54.920 [INFO][4564] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:03:55.041363 containerd[1976]: 2025-11-05 15:03:54.921 [INFO][4564] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:03:55.041363 containerd[1976]: 2025-11-05 15:03:54.921 [INFO][4564] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-83' Nov 5 15:03:55.041363 containerd[1976]: 2025-11-05 15:03:54.935 [INFO][4564] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c" host="ip-172-31-21-83" Nov 5 15:03:55.041363 containerd[1976]: 2025-11-05 15:03:54.944 [INFO][4564] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-83" Nov 5 15:03:55.041363 containerd[1976]: 2025-11-05 15:03:54.951 [INFO][4564] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:03:55.041363 containerd[1976]: 2025-11-05 15:03:54.954 [INFO][4564] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:03:55.041363 containerd[1976]: 2025-11-05 15:03:54.958 [INFO][4564] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:03:55.041363 containerd[1976]: 2025-11-05 15:03:54.958 [INFO][4564] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c" host="ip-172-31-21-83" Nov 5 15:03:55.043209 containerd[1976]: 2025-11-05 15:03:54.960 [INFO][4564] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c Nov 5 15:03:55.043209 containerd[1976]: 2025-11-05 15:03:54.967 [INFO][4564] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c" host="ip-172-31-21-83" Nov 5 15:03:55.043209 containerd[1976]: 2025-11-05 15:03:54.980 [INFO][4564] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.32.1/26] block=192.168.32.0/26 handle="k8s-pod-network.df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c" host="ip-172-31-21-83" Nov 5 15:03:55.043209 containerd[1976]: 2025-11-05 15:03:54.980 [INFO][4564] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.1/26] handle="k8s-pod-network.df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c" host="ip-172-31-21-83" Nov 5 15:03:55.043209 containerd[1976]: 2025-11-05 15:03:54.980 [INFO][4564] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:03:55.043209 containerd[1976]: 2025-11-05 15:03:54.981 [INFO][4564] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.1/26] IPv6=[] ContainerID="df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c" HandleID="k8s-pod-network.df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c" Workload="ip--172--31--21--83-k8s-whisker--5895d64bd--fm899-eth0" Nov 5 15:03:55.043485 containerd[1976]: 2025-11-05 15:03:54.988 [INFO][4553] cni-plugin/k8s.go 418: Populated endpoint ContainerID="df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c" Namespace="calico-system" Pod="whisker-5895d64bd-fm899" WorkloadEndpoint="ip--172--31--21--83-k8s-whisker--5895d64bd--fm899-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--83-k8s-whisker--5895d64bd--fm899-eth0", GenerateName:"whisker-5895d64bd-", Namespace:"calico-system", SelfLink:"", UID:"c36dc63e-c060-4c77-b41d-1b4d1b676e6a", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 3, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5895d64bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-83", ContainerID:"", Pod:"whisker-5895d64bd-fm899", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.32.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali31930132c33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:03:55.043485 containerd[1976]: 2025-11-05 15:03:54.989 [INFO][4553] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.1/32] ContainerID="df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c" Namespace="calico-system" Pod="whisker-5895d64bd-fm899" WorkloadEndpoint="ip--172--31--21--83-k8s-whisker--5895d64bd--fm899-eth0" Nov 5 15:03:55.043674 containerd[1976]: 2025-11-05 15:03:54.989 [INFO][4553] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali31930132c33 ContainerID="df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c" Namespace="calico-system" Pod="whisker-5895d64bd-fm899" WorkloadEndpoint="ip--172--31--21--83-k8s-whisker--5895d64bd--fm899-eth0" Nov 5 15:03:55.043674 containerd[1976]: 2025-11-05 15:03:55.011 [INFO][4553] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c" Namespace="calico-system" Pod="whisker-5895d64bd-fm899" WorkloadEndpoint="ip--172--31--21--83-k8s-whisker--5895d64bd--fm899-eth0" Nov 5 15:03:55.043773 containerd[1976]: 2025-11-05 15:03:55.011 [INFO][4553] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c" Namespace="calico-system" Pod="whisker-5895d64bd-fm899" 
WorkloadEndpoint="ip--172--31--21--83-k8s-whisker--5895d64bd--fm899-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--83-k8s-whisker--5895d64bd--fm899-eth0", GenerateName:"whisker-5895d64bd-", Namespace:"calico-system", SelfLink:"", UID:"c36dc63e-c060-4c77-b41d-1b4d1b676e6a", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 3, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5895d64bd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-83", ContainerID:"df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c", Pod:"whisker-5895d64bd-fm899", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.32.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali31930132c33", MAC:"9e:2c:aa:25:85:40", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:03:55.043895 containerd[1976]: 2025-11-05 15:03:55.036 [INFO][4553] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c" Namespace="calico-system" Pod="whisker-5895d64bd-fm899" WorkloadEndpoint="ip--172--31--21--83-k8s-whisker--5895d64bd--fm899-eth0" Nov 5 15:03:55.131234 containerd[1976]: time="2025-11-05T15:03:55.130945734Z" level=info msg="connecting to shim df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c" address="unix:///run/containerd/s/c011790c3c2b9839cf818159575fe55eb0e71d16f0e74b283ec9afd33a71bb20" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:03:55.188378 systemd[1]: Started cri-containerd-df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c.scope - libcontainer container df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c. 
Nov 5 15:03:55.421688 containerd[1976]: time="2025-11-05T15:03:55.421358359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5895d64bd-fm899,Uid:c36dc63e-c060-4c77-b41d-1b4d1b676e6a,Namespace:calico-system,Attempt:0,} returns sandbox id \"df577e576957241032e43b04826d53880fbe19063f9661c20c20e44b986e2a1c\"" Nov 5 15:03:55.426729 containerd[1976]: time="2025-11-05T15:03:55.426561739Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:03:55.804611 containerd[1976]: time="2025-11-05T15:03:55.803224293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dcwpj,Uid:e0f7c878-15e6-4e8b-a9be-3e225dd7e262,Namespace:kube-system,Attempt:0,}" Nov 5 15:03:55.805123 containerd[1976]: time="2025-11-05T15:03:55.804983877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jcc47,Uid:7ef28263-0ce9-4955-869b-6ae38808f23b,Namespace:calico-system,Attempt:0,}" Nov 5 15:03:55.815470 kubelet[3405]: I1105 15:03:55.815322 3405 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a23cc4fd-4004-43fd-a3bd-e3b7c8798f11" path="/var/lib/kubelet/pods/a23cc4fd-4004-43fd-a3bd-e3b7c8798f11/volumes" Nov 5 15:03:56.187576 containerd[1976]: time="2025-11-05T15:03:56.187508563Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:03:56.191137 containerd[1976]: time="2025-11-05T15:03:56.191054455Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:03:56.191956 containerd[1976]: time="2025-11-05T15:03:56.191104267Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:03:56.192076 kubelet[3405]: E1105 15:03:56.191553 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:03:56.192076 kubelet[3405]: E1105 15:03:56.191617 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:03:56.201769 kubelet[3405]: E1105 15:03:56.201618 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a6788880d9504b6b9eeaa6b75dbe9332,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fw6st,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5895d64bd-fm899_calico-system(c36dc63e-c060-4c77-b41d-1b4d1b676e6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:03:56.206554 containerd[1976]: time="2025-11-05T15:03:56.206446771Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:03:56.240554 systemd-networkd[1583]: cali08c094394f1: Link UP Nov 5 15:03:56.243228 systemd-networkd[1583]: cali08c094394f1: Gained carrier Nov 5 15:03:56.322870 containerd[1976]: 2025-11-05 15:03:55.977 [INFO][4721] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 15:03:56.322870 containerd[1976]: 2025-11-05 15:03:56.012 [INFO][4721] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--83-k8s-goldmane--666569f655--jcc47-eth0 goldmane-666569f655- calico-system 7ef28263-0ce9-4955-869b-6ae38808f23b 880 0 2025-11-05 15:03:24 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-21-83 goldmane-666569f655-jcc47 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali08c094394f1 [] [] }} ContainerID="8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490" Namespace="calico-system" Pod="goldmane-666569f655-jcc47" WorkloadEndpoint="ip--172--31--21--83-k8s-goldmane--666569f655--jcc47-" Nov 5 15:03:56.322870 containerd[1976]: 2025-11-05 15:03:56.012 [INFO][4721] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490" Namespace="calico-system" Pod="goldmane-666569f655-jcc47" WorkloadEndpoint="ip--172--31--21--83-k8s-goldmane--666569f655--jcc47-eth0" Nov 5 15:03:56.322870 
containerd[1976]: 2025-11-05 15:03:56.121 [INFO][4742] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490" HandleID="k8s-pod-network.8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490" Workload="ip--172--31--21--83-k8s-goldmane--666569f655--jcc47-eth0" Nov 5 15:03:56.323276 containerd[1976]: 2025-11-05 15:03:56.122 [INFO][4742] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490" HandleID="k8s-pod-network.8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490" Workload="ip--172--31--21--83-k8s-goldmane--666569f655--jcc47-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000387980), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-83", "pod":"goldmane-666569f655-jcc47", "timestamp":"2025-11-05 15:03:56.121561291 +0000 UTC"}, Hostname:"ip-172-31-21-83", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:03:56.323276 containerd[1976]: 2025-11-05 15:03:56.122 [INFO][4742] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:03:56.323276 containerd[1976]: 2025-11-05 15:03:56.122 [INFO][4742] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:03:56.323276 containerd[1976]: 2025-11-05 15:03:56.122 [INFO][4742] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-83' Nov 5 15:03:56.323276 containerd[1976]: 2025-11-05 15:03:56.144 [INFO][4742] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490" host="ip-172-31-21-83" Nov 5 15:03:56.323276 containerd[1976]: 2025-11-05 15:03:56.154 [INFO][4742] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-83" Nov 5 15:03:56.323276 containerd[1976]: 2025-11-05 15:03:56.163 [INFO][4742] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:03:56.323276 containerd[1976]: 2025-11-05 15:03:56.167 [INFO][4742] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:03:56.323276 containerd[1976]: 2025-11-05 15:03:56.172 [INFO][4742] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:03:56.323276 containerd[1976]: 2025-11-05 15:03:56.173 [INFO][4742] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490" host="ip-172-31-21-83" Nov 5 15:03:56.325234 containerd[1976]: 2025-11-05 15:03:56.178 [INFO][4742] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490 Nov 5 15:03:56.325234 containerd[1976]: 2025-11-05 15:03:56.193 [INFO][4742] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490" host="ip-172-31-21-83" Nov 5 15:03:56.325234 containerd[1976]: 2025-11-05 15:03:56.217 [INFO][4742] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.32.2/26] block=192.168.32.0/26 handle="k8s-pod-network.8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490" host="ip-172-31-21-83" 
Nov 5 15:03:56.325234 containerd[1976]: 2025-11-05 15:03:56.217 [INFO][4742] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.2/26] handle="k8s-pod-network.8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490" host="ip-172-31-21-83" Nov 5 15:03:56.325234 containerd[1976]: 2025-11-05 15:03:56.217 [INFO][4742] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:03:56.325234 containerd[1976]: 2025-11-05 15:03:56.217 [INFO][4742] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.2/26] IPv6=[] ContainerID="8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490" HandleID="k8s-pod-network.8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490" Workload="ip--172--31--21--83-k8s-goldmane--666569f655--jcc47-eth0" Nov 5 15:03:56.325601 containerd[1976]: 2025-11-05 15:03:56.229 [INFO][4721] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490" Namespace="calico-system" Pod="goldmane-666569f655-jcc47" WorkloadEndpoint="ip--172--31--21--83-k8s-goldmane--666569f655--jcc47-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--83-k8s-goldmane--666569f655--jcc47-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7ef28263-0ce9-4955-869b-6ae38808f23b", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 3, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-83", ContainerID:"", Pod:"goldmane-666569f655-jcc47", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.32.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali08c094394f1", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:03:56.325601 containerd[1976]: 2025-11-05 15:03:56.230 [INFO][4721] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.2/32] ContainerID="8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490" Namespace="calico-system" Pod="goldmane-666569f655-jcc47" WorkloadEndpoint="ip--172--31--21--83-k8s-goldmane--666569f655--jcc47-eth0" Nov 5 15:03:56.325788 containerd[1976]: 2025-11-05 15:03:56.230 [INFO][4721] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08c094394f1 ContainerID="8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490" Namespace="calico-system" Pod="goldmane-666569f655-jcc47" WorkloadEndpoint="ip--172--31--21--83-k8s-goldmane--666569f655--jcc47-eth0" Nov 5 15:03:56.325788 containerd[1976]: 2025-11-05 15:03:56.254 [INFO][4721] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490" Namespace="calico-system" Pod="goldmane-666569f655-jcc47" 
WorkloadEndpoint="ip--172--31--21--83-k8s-goldmane--666569f655--jcc47-eth0" Nov 5 15:03:56.325881 containerd[1976]: 2025-11-05 15:03:56.271 [INFO][4721] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490" Namespace="calico-system" Pod="goldmane-666569f655-jcc47" WorkloadEndpoint="ip--172--31--21--83-k8s-goldmane--666569f655--jcc47-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--83-k8s-goldmane--666569f655--jcc47-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7ef28263-0ce9-4955-869b-6ae38808f23b", ResourceVersion:"880", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 3, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-83", ContainerID:"8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490", Pod:"goldmane-666569f655-jcc47", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.32.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali08c094394f1", MAC:"d6:d1:83:13:19:5c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:03:56.326006 containerd[1976]: 2025-11-05 15:03:56.315 [INFO][4721] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490" Namespace="calico-system" Pod="goldmane-666569f655-jcc47" WorkloadEndpoint="ip--172--31--21--83-k8s-goldmane--666569f655--jcc47-eth0" Nov 5 15:03:56.397092 containerd[1976]: time="2025-11-05T15:03:56.396938960Z" level=info msg="connecting to shim 8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490" address="unix:///run/containerd/s/3eed89a7b6ec48a240b7399e9ccfbe4478bc829de39ac7e2148b613e56b970fd" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:03:56.487786 systemd-networkd[1583]: cali8fb6dd37f39: Link UP Nov 5 15:03:56.489341 systemd-networkd[1583]: cali8fb6dd37f39: Gained carrier Nov 5 15:03:56.504335 systemd[1]: Started cri-containerd-8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490.scope - libcontainer container 8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490. 
Nov 5 15:03:56.543053 containerd[1976]: 2025-11-05 15:03:55.944 [INFO][4711] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 15:03:56.543053 containerd[1976]: 2025-11-05 15:03:55.991 [INFO][4711] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--83-k8s-coredns--668d6bf9bc--dcwpj-eth0 coredns-668d6bf9bc- kube-system e0f7c878-15e6-4e8b-a9be-3e225dd7e262 882 0 2025-11-05 15:02:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-21-83 coredns-668d6bf9bc-dcwpj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8fb6dd37f39 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501" Namespace="kube-system" Pod="coredns-668d6bf9bc-dcwpj" WorkloadEndpoint="ip--172--31--21--83-k8s-coredns--668d6bf9bc--dcwpj-" Nov 5 15:03:56.543053 containerd[1976]: 2025-11-05 15:03:55.991 [INFO][4711] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501" Namespace="kube-system" Pod="coredns-668d6bf9bc-dcwpj" WorkloadEndpoint="ip--172--31--21--83-k8s-coredns--668d6bf9bc--dcwpj-eth0" Nov 5 15:03:56.543053 containerd[1976]: 2025-11-05 15:03:56.125 [INFO][4737] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501" HandleID="k8s-pod-network.0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501" Workload="ip--172--31--21--83-k8s-coredns--668d6bf9bc--dcwpj-eth0" Nov 5 15:03:56.543915 containerd[1976]: 2025-11-05 15:03:56.126 [INFO][4737] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501" HandleID="k8s-pod-network.0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501" Workload="ip--172--31--21--83-k8s-coredns--668d6bf9bc--dcwpj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031bea0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-21-83", "pod":"coredns-668d6bf9bc-dcwpj", "timestamp":"2025-11-05 15:03:56.125498347 +0000 UTC"}, Hostname:"ip-172-31-21-83", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:03:56.543915 containerd[1976]: 2025-11-05 15:03:56.126 [INFO][4737] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:03:56.543915 containerd[1976]: 2025-11-05 15:03:56.217 [INFO][4737] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:03:56.543915 containerd[1976]: 2025-11-05 15:03:56.218 [INFO][4737] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-83' Nov 5 15:03:56.543915 containerd[1976]: 2025-11-05 15:03:56.263 [INFO][4737] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501" host="ip-172-31-21-83" Nov 5 15:03:56.543915 containerd[1976]: 2025-11-05 15:03:56.294 [INFO][4737] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-83" Nov 5 15:03:56.543915 containerd[1976]: 2025-11-05 15:03:56.340 [INFO][4737] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:03:56.543915 containerd[1976]: 2025-11-05 15:03:56.360 [INFO][4737] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:03:56.543915 containerd[1976]: 2025-11-05 15:03:56.385 [INFO][4737] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:03:56.543915 containerd[1976]: 2025-11-05 15:03:56.387 [INFO][4737] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501" host="ip-172-31-21-83" Nov 5 15:03:56.546388 containerd[1976]: 2025-11-05 15:03:56.395 [INFO][4737] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501 Nov 5 15:03:56.546388 containerd[1976]: 2025-11-05 15:03:56.418 [INFO][4737] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501" host="ip-172-31-21-83" Nov 5 15:03:56.546388 containerd[1976]: 2025-11-05 15:03:56.450 [INFO][4737] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.32.3/26] block=192.168.32.0/26 handle="k8s-pod-network.0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501" host="ip-172-31-21-83" Nov 5 15:03:56.546388 containerd[1976]: 2025-11-05 15:03:56.451 [INFO][4737] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.3/26] handle="k8s-pod-network.0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501" host="ip-172-31-21-83" Nov 5 15:03:56.546388 containerd[1976]: 2025-11-05 15:03:56.453 [INFO][4737] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:03:56.546388 containerd[1976]: 2025-11-05 15:03:56.453 [INFO][4737] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.3/26] IPv6=[] ContainerID="0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501" HandleID="k8s-pod-network.0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501" Workload="ip--172--31--21--83-k8s-coredns--668d6bf9bc--dcwpj-eth0" Nov 5 15:03:56.546667 containerd[1976]: 2025-11-05 15:03:56.459 [INFO][4711] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501" Namespace="kube-system" Pod="coredns-668d6bf9bc-dcwpj" WorkloadEndpoint="ip--172--31--21--83-k8s-coredns--668d6bf9bc--dcwpj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--83-k8s-coredns--668d6bf9bc--dcwpj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e0f7c878-15e6-4e8b-a9be-3e225dd7e262", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 2, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-83", ContainerID:"", Pod:"coredns-668d6bf9bc-dcwpj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8fb6dd37f39", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:03:56.546667 containerd[1976]: 2025-11-05 15:03:56.459 [INFO][4711] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.3/32] ContainerID="0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501" Namespace="kube-system" Pod="coredns-668d6bf9bc-dcwpj" WorkloadEndpoint="ip--172--31--21--83-k8s-coredns--668d6bf9bc--dcwpj-eth0" Nov 5 15:03:56.546667 containerd[1976]: 2025-11-05 15:03:56.460 [INFO][4711] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8fb6dd37f39 ContainerID="0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501" Namespace="kube-system" Pod="coredns-668d6bf9bc-dcwpj" WorkloadEndpoint="ip--172--31--21--83-k8s-coredns--668d6bf9bc--dcwpj-eth0" Nov 5 15:03:56.546667 containerd[1976]: 2025-11-05 15:03:56.490 [INFO][4711] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501" Namespace="kube-system" Pod="coredns-668d6bf9bc-dcwpj" 
WorkloadEndpoint="ip--172--31--21--83-k8s-coredns--668d6bf9bc--dcwpj-eth0" Nov 5 15:03:56.546667 containerd[1976]: 2025-11-05 15:03:56.499 [INFO][4711] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501" Namespace="kube-system" Pod="coredns-668d6bf9bc-dcwpj" WorkloadEndpoint="ip--172--31--21--83-k8s-coredns--668d6bf9bc--dcwpj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--83-k8s-coredns--668d6bf9bc--dcwpj-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e0f7c878-15e6-4e8b-a9be-3e225dd7e262", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 2, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-83", ContainerID:"0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501", Pod:"coredns-668d6bf9bc-dcwpj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8fb6dd37f39", MAC:"d2:36:b6:53:8e:63", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:03:56.546667 containerd[1976]: 2025-11-05 15:03:56.537 [INFO][4711] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501" Namespace="kube-system" Pod="coredns-668d6bf9bc-dcwpj" WorkloadEndpoint="ip--172--31--21--83-k8s-coredns--668d6bf9bc--dcwpj-eth0" Nov 5 15:03:56.565854 containerd[1976]: time="2025-11-05T15:03:56.565697589Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:03:56.573684 containerd[1976]: time="2025-11-05T15:03:56.573492585Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:03:56.573684 containerd[1976]: time="2025-11-05T15:03:56.573630381Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:03:56.574600 kubelet[3405]: E1105 15:03:56.574447 3405 log.go:32] "PullImage from image service failed" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:03:56.575031 kubelet[3405]: E1105 15:03:56.574777 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:03:56.575766 kubelet[3405]: E1105 15:03:56.575417 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw6st,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5895d64bd-fm899_calico-system(c36dc63e-c060-4c77-b41d-1b4d1b676e6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:03:56.577728 kubelet[3405]: E1105 15:03:56.577443 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: 
\"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5895d64bd-fm899" podUID="c36dc63e-c060-4c77-b41d-1b4d1b676e6a" Nov 5 15:03:56.615600 containerd[1976]: time="2025-11-05T15:03:56.615465345Z" level=info msg="connecting to shim 0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501" address="unix:///run/containerd/s/25c027799896358f7e31cc2eb9ab1ec5d6ff7e7415afe521bf842c97e321b138" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:03:56.693496 systemd[1]: Started cri-containerd-0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501.scope - libcontainer container 0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501. Nov 5 15:03:56.800172 containerd[1976]: time="2025-11-05T15:03:56.799528378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fpblv,Uid:1b5f9275-60ba-4e84-a340-fb8945d27281,Namespace:kube-system,Attempt:0,}" Nov 5 15:03:56.855352 containerd[1976]: time="2025-11-05T15:03:56.854660578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dcwpj,Uid:e0f7c878-15e6-4e8b-a9be-3e225dd7e262,Namespace:kube-system,Attempt:0,} returns sandbox id \"0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501\"" Nov 5 15:03:56.858842 containerd[1976]: time="2025-11-05T15:03:56.856907998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-jcc47,Uid:7ef28263-0ce9-4955-869b-6ae38808f23b,Namespace:calico-system,Attempt:0,} returns sandbox id \"8905d67990b1270744c0582223775c718f406be461ce6bbf0735c1a4eef4e490\"" Nov 5 15:03:56.867459 containerd[1976]: time="2025-11-05T15:03:56.867340294Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:03:56.870010 containerd[1976]: time="2025-11-05T15:03:56.869955766Z" level=info msg="CreateContainer within sandbox \"0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:03:56.934417 systemd-networkd[1583]: cali31930132c33: Gained IPv6LL Nov 5 15:03:56.949682 containerd[1976]: time="2025-11-05T15:03:56.949609019Z" level=info msg="Container 61eb9e4976dd3c5badf2b4097b912dceb715001ee2e56c2e0e829ec59d3760ec: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:03:56.978742 containerd[1976]: time="2025-11-05T15:03:56.978035087Z" level=info msg="CreateContainer within sandbox \"0aa1ac73a5161bff7448981f897d9096883a6e4ceae53d06c2c04cbd8a15f501\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"61eb9e4976dd3c5badf2b4097b912dceb715001ee2e56c2e0e829ec59d3760ec\"" Nov 5 15:03:56.982954 containerd[1976]: time="2025-11-05T15:03:56.982638491Z" level=info msg="StartContainer for \"61eb9e4976dd3c5badf2b4097b912dceb715001ee2e56c2e0e829ec59d3760ec\"" Nov 5 15:03:56.990561 containerd[1976]: time="2025-11-05T15:03:56.990493163Z" level=info msg="connecting to shim 61eb9e4976dd3c5badf2b4097b912dceb715001ee2e56c2e0e829ec59d3760ec" address="unix:///run/containerd/s/25c027799896358f7e31cc2eb9ab1ec5d6ff7e7415afe521bf842c97e321b138" protocol=ttrpc version=3 Nov 5 15:03:57.047000 systemd[1]: Started cri-containerd-61eb9e4976dd3c5badf2b4097b912dceb715001ee2e56c2e0e829ec59d3760ec.scope - libcontainer container 61eb9e4976dd3c5badf2b4097b912dceb715001ee2e56c2e0e829ec59d3760ec. 
Nov 5 15:03:57.142178 containerd[1976]: time="2025-11-05T15:03:57.142058180Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:03:57.144879 containerd[1976]: time="2025-11-05T15:03:57.144485984Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:03:57.145396 kubelet[3405]: E1105 15:03:57.145276 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:03:57.147588 kubelet[3405]: E1105 15:03:57.145376 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:03:57.147588 kubelet[3405]: E1105 15:03:57.146483 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvjfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jcc47_calico-system(7ef28263-0ce9-4955-869b-6ae38808f23b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:03:57.149121 kubelet[3405]: E1105 15:03:57.147767 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jcc47" podUID="7ef28263-0ce9-4955-869b-6ae38808f23b" Nov 5 15:03:57.150249 containerd[1976]: time="2025-11-05T15:03:57.147224744Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:03:57.192532 containerd[1976]: time="2025-11-05T15:03:57.191709212Z" level=info msg="StartContainer for \"61eb9e4976dd3c5badf2b4097b912dceb715001ee2e56c2e0e829ec59d3760ec\" returns successfully" Nov 5 15:03:57.234891 systemd-networkd[1583]: cali659955965e5: Link UP Nov 5 15:03:57.238900 systemd-networkd[1583]: cali659955965e5: Gained carrier Nov 5 15:03:57.300647 containerd[1976]: 2025-11-05 15:03:56.968 [INFO][4878] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--83-k8s-coredns--668d6bf9bc--fpblv-eth0 coredns-668d6bf9bc- kube-system 1b5f9275-60ba-4e84-a340-fb8945d27281 877 0 2025-11-05 15:02:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-21-83 coredns-668d6bf9bc-fpblv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali659955965e5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f" Namespace="kube-system" Pod="coredns-668d6bf9bc-fpblv" WorkloadEndpoint="ip--172--31--21--83-k8s-coredns--668d6bf9bc--fpblv-" Nov 5 15:03:57.300647 containerd[1976]: 2025-11-05 15:03:56.969 [INFO][4878] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f" Namespace="kube-system" Pod="coredns-668d6bf9bc-fpblv" WorkloadEndpoint="ip--172--31--21--83-k8s-coredns--668d6bf9bc--fpblv-eth0" Nov 5 15:03:57.300647 
containerd[1976]: 2025-11-05 15:03:57.097 [INFO][4905] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f" HandleID="k8s-pod-network.062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f" Workload="ip--172--31--21--83-k8s-coredns--668d6bf9bc--fpblv-eth0" Nov 5 15:03:57.300647 containerd[1976]: 2025-11-05 15:03:57.097 [INFO][4905] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f" HandleID="k8s-pod-network.062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f" Workload="ip--172--31--21--83-k8s-coredns--668d6bf9bc--fpblv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003545d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-21-83", "pod":"coredns-668d6bf9bc-fpblv", "timestamp":"2025-11-05 15:03:57.097492988 +0000 UTC"}, Hostname:"ip-172-31-21-83", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:03:57.300647 containerd[1976]: 2025-11-05 15:03:57.097 [INFO][4905] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:03:57.300647 containerd[1976]: 2025-11-05 15:03:57.098 [INFO][4905] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:03:57.300647 containerd[1976]: 2025-11-05 15:03:57.098 [INFO][4905] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-83' Nov 5 15:03:57.300647 containerd[1976]: 2025-11-05 15:03:57.123 [INFO][4905] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f" host="ip-172-31-21-83" Nov 5 15:03:57.300647 containerd[1976]: 2025-11-05 15:03:57.141 [INFO][4905] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-83" Nov 5 15:03:57.300647 containerd[1976]: 2025-11-05 15:03:57.160 [INFO][4905] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:03:57.300647 containerd[1976]: 2025-11-05 15:03:57.166 [INFO][4905] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:03:57.300647 containerd[1976]: 2025-11-05 15:03:57.175 [INFO][4905] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:03:57.300647 containerd[1976]: 2025-11-05 15:03:57.175 [INFO][4905] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f" host="ip-172-31-21-83" Nov 5 15:03:57.300647 containerd[1976]: 2025-11-05 15:03:57.180 [INFO][4905] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f Nov 5 15:03:57.300647 containerd[1976]: 2025-11-05 15:03:57.188 [INFO][4905] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f" host="ip-172-31-21-83" Nov 5 15:03:57.300647 containerd[1976]: 2025-11-05 15:03:57.217 [INFO][4905] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.32.4/26] block=192.168.32.0/26 handle="k8s-pod-network.062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f" host="ip-172-31-21-83" Nov 5 
15:03:57.300647 containerd[1976]: 2025-11-05 15:03:57.217 [INFO][4905] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.4/26] handle="k8s-pod-network.062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f" host="ip-172-31-21-83" Nov 5 15:03:57.300647 containerd[1976]: 2025-11-05 15:03:57.218 [INFO][4905] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:03:57.300647 containerd[1976]: 2025-11-05 15:03:57.218 [INFO][4905] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.4/26] IPv6=[] ContainerID="062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f" HandleID="k8s-pod-network.062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f" Workload="ip--172--31--21--83-k8s-coredns--668d6bf9bc--fpblv-eth0" Nov 5 15:03:57.305975 containerd[1976]: 2025-11-05 15:03:57.223 [INFO][4878] cni-plugin/k8s.go 418: Populated endpoint ContainerID="062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f" Namespace="kube-system" Pod="coredns-668d6bf9bc-fpblv" WorkloadEndpoint="ip--172--31--21--83-k8s-coredns--668d6bf9bc--fpblv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--83-k8s-coredns--668d6bf9bc--fpblv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1b5f9275-60ba-4e84-a340-fb8945d27281", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 2, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-83", ContainerID:"", Pod:"coredns-668d6bf9bc-fpblv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali659955965e5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:03:57.305975 containerd[1976]: 2025-11-05 15:03:57.223 [INFO][4878] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.4/32] ContainerID="062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f" Namespace="kube-system" Pod="coredns-668d6bf9bc-fpblv" WorkloadEndpoint="ip--172--31--21--83-k8s-coredns--668d6bf9bc--fpblv-eth0" Nov 5 15:03:57.305975 containerd[1976]: 2025-11-05 15:03:57.224 [INFO][4878] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali659955965e5 ContainerID="062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f" Namespace="kube-system" Pod="coredns-668d6bf9bc-fpblv" 
WorkloadEndpoint="ip--172--31--21--83-k8s-coredns--668d6bf9bc--fpblv-eth0" Nov 5 15:03:57.305975 containerd[1976]: 2025-11-05 15:03:57.233 [INFO][4878] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f" Namespace="kube-system" Pod="coredns-668d6bf9bc-fpblv" WorkloadEndpoint="ip--172--31--21--83-k8s-coredns--668d6bf9bc--fpblv-eth0" Nov 5 15:03:57.305975 containerd[1976]: 2025-11-05 15:03:57.235 [INFO][4878] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f" Namespace="kube-system" Pod="coredns-668d6bf9bc-fpblv" WorkloadEndpoint="ip--172--31--21--83-k8s-coredns--668d6bf9bc--fpblv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--83-k8s-coredns--668d6bf9bc--fpblv-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"1b5f9275-60ba-4e84-a340-fb8945d27281", ResourceVersion:"877", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 2, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-83", ContainerID:"062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f", Pod:"coredns-668d6bf9bc-fpblv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.32.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali659955965e5", MAC:"b2:df:57:43:0a:73", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:03:57.305975 containerd[1976]: 2025-11-05 15:03:57.283 [INFO][4878] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f" Namespace="kube-system" Pod="coredns-668d6bf9bc-fpblv" WorkloadEndpoint="ip--172--31--21--83-k8s-coredns--668d6bf9bc--fpblv-eth0" Nov 5 15:03:57.312556 kubelet[3405]: E1105 15:03:57.310364 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-jcc47" podUID="7ef28263-0ce9-4955-869b-6ae38808f23b" Nov 5 15:03:57.314891 kubelet[3405]: E1105 15:03:57.314480 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5895d64bd-fm899" podUID="c36dc63e-c060-4c77-b41d-1b4d1b676e6a" Nov 5 15:03:57.355492 kubelet[3405]: I1105 15:03:57.355371 3405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dcwpj" podStartSLOduration=58.355342641 podStartE2EDuration="58.355342641s" podCreationTimestamp="2025-11-05 15:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:03:57.353277033 +0000 UTC m=+63.897218695" watchObservedRunningTime="2025-11-05 15:03:57.355342641 +0000 UTC m=+63.899284375" Nov 5 15:03:57.382457 systemd-networkd[1583]: cali08c094394f1: Gained IPv6LL Nov 5 15:03:57.393199 containerd[1976]: time="2025-11-05T15:03:57.391689105Z" level=info msg="connecting to shim 062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f" address="unix:///run/containerd/s/2e32e90f384e159c8dbe6895a38784336cc90d79f201003cdec245282605f43a" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:03:57.487198 systemd[1]: Started cri-containerd-062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f.scope - libcontainer container 062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f. 
Nov 5 15:03:57.609581 containerd[1976]: time="2025-11-05T15:03:57.609465802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fpblv,Uid:1b5f9275-60ba-4e84-a340-fb8945d27281,Namespace:kube-system,Attempt:0,} returns sandbox id \"062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f\"" Nov 5 15:03:57.616320 containerd[1976]: time="2025-11-05T15:03:57.616251598Z" level=info msg="CreateContainer within sandbox \"062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:03:57.640016 containerd[1976]: time="2025-11-05T15:03:57.639476266Z" level=info msg="Container e5a4fe286374f3fc72bc7cbb0c1f98ee81cc29034b84a50aac1183b57f7ccca2: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:03:57.653729 containerd[1976]: time="2025-11-05T15:03:57.653219554Z" level=info msg="CreateContainer within sandbox \"062880e56999f7e675db208ab24d9021e4b90272e37edce19472571e9290414f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e5a4fe286374f3fc72bc7cbb0c1f98ee81cc29034b84a50aac1183b57f7ccca2\"" Nov 5 15:03:57.656060 containerd[1976]: time="2025-11-05T15:03:57.655974550Z" level=info msg="StartContainer for \"e5a4fe286374f3fc72bc7cbb0c1f98ee81cc29034b84a50aac1183b57f7ccca2\"" Nov 5 15:03:57.659777 containerd[1976]: time="2025-11-05T15:03:57.659549590Z" level=info msg="connecting to shim e5a4fe286374f3fc72bc7cbb0c1f98ee81cc29034b84a50aac1183b57f7ccca2" address="unix:///run/containerd/s/2e32e90f384e159c8dbe6895a38784336cc90d79f201003cdec245282605f43a" protocol=ttrpc version=3 Nov 5 15:03:57.709524 systemd[1]: Started cri-containerd-e5a4fe286374f3fc72bc7cbb0c1f98ee81cc29034b84a50aac1183b57f7ccca2.scope - libcontainer container e5a4fe286374f3fc72bc7cbb0c1f98ee81cc29034b84a50aac1183b57f7ccca2. Nov 5 15:03:57.803277 containerd[1976]: time="2025-11-05T15:03:57.802681271Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb8dc7b97-bnfj9,Uid:8ec6fd17-e646-449e-8324-b2210e743bb4,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:03:57.828862 containerd[1976]: time="2025-11-05T15:03:57.828816095Z" level=info msg="StartContainer for \"e5a4fe286374f3fc72bc7cbb0c1f98ee81cc29034b84a50aac1183b57f7ccca2\" returns successfully" Nov 5 15:03:58.090145 (udev-worker)[4498]: Network interface NamePolicy= disabled on kernel command line. Nov 5 15:03:58.096801 systemd-networkd[1583]: vxlan.calico: Link UP Nov 5 15:03:58.097448 systemd-networkd[1583]: vxlan.calico: Gained carrier Nov 5 15:03:58.190534 systemd-networkd[1583]: calic382c095121: Link UP Nov 5 15:03:58.192503 systemd-networkd[1583]: calic382c095121: Gained carrier Nov 5 15:03:58.194892 (udev-worker)[5082]: Network interface NamePolicy= disabled on kernel command line. 
Nov 5 15:03:58.215479 systemd-networkd[1583]: cali8fb6dd37f39: Gained IPv6LL Nov 5 15:03:58.228001 containerd[1976]: 2025-11-05 15:03:57.976 [INFO][5046] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--bnfj9-eth0 calico-apiserver-7bb8dc7b97- calico-apiserver 8ec6fd17-e646-449e-8324-b2210e743bb4 881 0 2025-11-05 15:03:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bb8dc7b97 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-21-83 calico-apiserver-7bb8dc7b97-bnfj9 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic382c095121 [] [] }} ContainerID="ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36" Namespace="calico-apiserver" Pod="calico-apiserver-7bb8dc7b97-bnfj9" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--bnfj9-" Nov 5 15:03:58.228001 containerd[1976]: 2025-11-05 15:03:57.976 [INFO][5046] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36" Namespace="calico-apiserver" Pod="calico-apiserver-7bb8dc7b97-bnfj9" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--bnfj9-eth0" Nov 5 15:03:58.228001 containerd[1976]: 2025-11-05 15:03:58.065 [INFO][5063] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36" HandleID="k8s-pod-network.ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36" Workload="ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--bnfj9-eth0" Nov 5 15:03:58.228001 containerd[1976]: 2025-11-05 15:03:58.066 [INFO][5063] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36" HandleID="k8s-pod-network.ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36" Workload="ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--bnfj9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000120eb0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-21-83", "pod":"calico-apiserver-7bb8dc7b97-bnfj9", "timestamp":"2025-11-05 15:03:58.065835092 +0000 UTC"}, Hostname:"ip-172-31-21-83", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:03:58.228001 containerd[1976]: 2025-11-05 15:03:58.066 [INFO][5063] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:03:58.228001 containerd[1976]: 2025-11-05 15:03:58.066 [INFO][5063] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:03:58.228001 containerd[1976]: 2025-11-05 15:03:58.066 [INFO][5063] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-83' Nov 5 15:03:58.228001 containerd[1976]: 2025-11-05 15:03:58.085 [INFO][5063] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36" host="ip-172-31-21-83" Nov 5 15:03:58.228001 containerd[1976]: 2025-11-05 15:03:58.103 [INFO][5063] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-83" Nov 5 15:03:58.228001 containerd[1976]: 2025-11-05 15:03:58.125 [INFO][5063] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:03:58.228001 containerd[1976]: 2025-11-05 15:03:58.131 [INFO][5063] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:03:58.228001 containerd[1976]: 2025-11-05 15:03:58.138 [INFO][5063] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:03:58.228001 containerd[1976]: 2025-11-05 15:03:58.138 [INFO][5063] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36" host="ip-172-31-21-83" Nov 5 15:03:58.228001 containerd[1976]: 2025-11-05 15:03:58.144 [INFO][5063] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36 Nov 5 15:03:58.228001 containerd[1976]: 2025-11-05 15:03:58.156 [INFO][5063] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36" host="ip-172-31-21-83" Nov 5 15:03:58.228001 containerd[1976]: 2025-11-05 15:03:58.171 [INFO][5063] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.32.5/26] block=192.168.32.0/26 handle="k8s-pod-network.ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36" host="ip-172-31-21-83" Nov 5 15:03:58.228001 containerd[1976]: 2025-11-05 15:03:58.172 [INFO][5063] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.5/26] handle="k8s-pod-network.ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36" host="ip-172-31-21-83" Nov 5 15:03:58.228001 containerd[1976]: 2025-11-05 15:03:58.173 [INFO][5063] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:03:58.228001 containerd[1976]: 2025-11-05 15:03:58.173 [INFO][5063] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.5/26] IPv6=[] ContainerID="ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36" HandleID="k8s-pod-network.ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36" Workload="ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--bnfj9-eth0" Nov 5 15:03:58.231340 containerd[1976]: 2025-11-05 15:03:58.177 [INFO][5046] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36" Namespace="calico-apiserver" Pod="calico-apiserver-7bb8dc7b97-bnfj9" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--bnfj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--bnfj9-eth0", GenerateName:"calico-apiserver-7bb8dc7b97-", Namespace:"calico-apiserver", SelfLink:"", UID:"8ec6fd17-e646-449e-8324-b2210e743bb4", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb8dc7b97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-83", ContainerID:"", Pod:"calico-apiserver-7bb8dc7b97-bnfj9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic382c095121", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:03:58.231340 containerd[1976]: 2025-11-05 15:03:58.178 [INFO][5046] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.5/32] ContainerID="ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36" Namespace="calico-apiserver" Pod="calico-apiserver-7bb8dc7b97-bnfj9" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--bnfj9-eth0" Nov 5 15:03:58.231340 containerd[1976]: 2025-11-05 15:03:58.178 [INFO][5046] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic382c095121 ContainerID="ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36" Namespace="calico-apiserver" Pod="calico-apiserver-7bb8dc7b97-bnfj9" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--bnfj9-eth0" Nov 5 15:03:58.231340 containerd[1976]: 2025-11-05 15:03:58.193 [INFO][5046] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36" Namespace="calico-apiserver" Pod="calico-apiserver-7bb8dc7b97-bnfj9" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--bnfj9-eth0" Nov 5 15:03:58.231340 containerd[1976]: 2025-11-05 15:03:58.195 [INFO][5046] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36" Namespace="calico-apiserver" Pod="calico-apiserver-7bb8dc7b97-bnfj9" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--bnfj9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--bnfj9-eth0", GenerateName:"calico-apiserver-7bb8dc7b97-", Namespace:"calico-apiserver", SelfLink:"", UID:"8ec6fd17-e646-449e-8324-b2210e743bb4", ResourceVersion:"881", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb8dc7b97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-83", ContainerID:"ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36", Pod:"calico-apiserver-7bb8dc7b97-bnfj9", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic382c095121", MAC:"5e:e6:07:e5:a6:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:03:58.231340 containerd[1976]: 2025-11-05 15:03:58.221 [INFO][5046] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36" Namespace="calico-apiserver" Pod="calico-apiserver-7bb8dc7b97-bnfj9" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--bnfj9-eth0" Nov 5 15:03:58.324019 kubelet[3405]: E1105 15:03:58.323950 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jcc47" podUID="7ef28263-0ce9-4955-869b-6ae38808f23b" Nov 5 15:03:58.335755 containerd[1976]: time="2025-11-05T15:03:58.333502186Z" level=info msg="connecting to shim ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36" address="unix:///run/containerd/s/a75bdd57cacf8c072cd3af2600107a6f6236753b5588b01f8f5173fcc5a6cf12" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:03:58.356609 kubelet[3405]: I1105 15:03:58.354819 3405 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fpblv" podStartSLOduration=59.35479681 podStartE2EDuration="59.35479681s" podCreationTimestamp="2025-11-05 15:02:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-11-05 15:03:58.354317158 +0000 UTC m=+64.898258808" watchObservedRunningTime="2025-11-05 15:03:58.35479681 +0000 UTC m=+64.898738472" Nov 5 15:03:58.428486 systemd[1]: Started cri-containerd-ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36.scope - libcontainer container ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36. Nov 5 15:03:58.801941 containerd[1976]: time="2025-11-05T15:03:58.801879252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-784689597c-cgw5w,Uid:10c8b7bb-d826-4668-8911-b97ba8246d4b,Namespace:calico-system,Attempt:0,}" Nov 5 15:03:58.803398 containerd[1976]: time="2025-11-05T15:03:58.803334924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qgwk8,Uid:ef9a0063-5427-4eaf-b6d6-01cd9334db4b,Namespace:calico-system,Attempt:0,}" Nov 5 15:03:58.905272 containerd[1976]: time="2025-11-05T15:03:58.903662785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb8dc7b97-bnfj9,Uid:8ec6fd17-e646-449e-8324-b2210e743bb4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ce279577170ad2f35801f16e450ded059ddf33e31abb8eba05d7db2fe182ab36\"" Nov 5 15:03:58.914696 containerd[1976]: time="2025-11-05T15:03:58.913478173Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:03:59.238708 systemd-networkd[1583]: cali659955965e5: Gained IPv6LL Nov 5 15:03:59.290674 systemd-networkd[1583]: cali05e38bf1bc5: Link UP Nov 5 15:03:59.294247 systemd-networkd[1583]: cali05e38bf1bc5: Gained carrier Nov 5 15:03:59.299242 (udev-worker)[5095]: Network interface NamePolicy= disabled on kernel command line. Nov 5 15:03:59.353554 containerd[1976]: 2025-11-05 15:03:59.068 [INFO][5147] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--83-k8s-calico--kube--controllers--784689597c--cgw5w-eth0 calico-kube-controllers-784689597c- calico-system 10c8b7bb-d826-4668-8911-b97ba8246d4b 883 0 2025-11-05 15:03:30 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:784689597c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-21-83 calico-kube-controllers-784689597c-cgw5w eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali05e38bf1bc5 [] [] }} ContainerID="31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f" Namespace="calico-system" Pod="calico-kube-controllers-784689597c-cgw5w" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--kube--controllers--784689597c--cgw5w-" Nov 5 15:03:59.353554 containerd[1976]: 2025-11-05 15:03:59.069 [INFO][5147] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f" Namespace="calico-system" Pod="calico-kube-controllers-784689597c-cgw5w" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--kube--controllers--784689597c--cgw5w-eth0" Nov 5 15:03:59.353554 containerd[1976]: 2025-11-05 15:03:59.172 [INFO][5170] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f" HandleID="k8s-pod-network.31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f" Workload="ip--172--31--21--83-k8s-calico--kube--controllers--784689597c--cgw5w-eth0" Nov 5 
15:03:59.353554 containerd[1976]: 2025-11-05 15:03:59.177 [INFO][5170] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f" HandleID="k8s-pod-network.31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f" Workload="ip--172--31--21--83-k8s-calico--kube--controllers--784689597c--cgw5w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c1f60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-83", "pod":"calico-kube-controllers-784689597c-cgw5w", "timestamp":"2025-11-05 15:03:59.172633186 +0000 UTC"}, Hostname:"ip-172-31-21-83", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:03:59.353554 containerd[1976]: 2025-11-05 15:03:59.177 [INFO][5170] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:03:59.353554 containerd[1976]: 2025-11-05 15:03:59.177 [INFO][5170] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:03:59.353554 containerd[1976]: 2025-11-05 15:03:59.177 [INFO][5170] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-83' Nov 5 15:03:59.353554 containerd[1976]: 2025-11-05 15:03:59.203 [INFO][5170] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f" host="ip-172-31-21-83" Nov 5 15:03:59.353554 containerd[1976]: 2025-11-05 15:03:59.215 [INFO][5170] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-83" Nov 5 15:03:59.353554 containerd[1976]: 2025-11-05 15:03:59.224 [INFO][5170] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:03:59.353554 containerd[1976]: 2025-11-05 15:03:59.229 [INFO][5170] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:03:59.353554 containerd[1976]: 2025-11-05 15:03:59.234 [INFO][5170] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:03:59.353554 containerd[1976]: 2025-11-05 15:03:59.235 [INFO][5170] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f" host="ip-172-31-21-83" Nov 5 15:03:59.353554 containerd[1976]: 2025-11-05 15:03:59.240 [INFO][5170] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f Nov 5 15:03:59.353554 containerd[1976]: 2025-11-05 15:03:59.250 [INFO][5170] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f" host="ip-172-31-21-83" Nov 5 15:03:59.353554 containerd[1976]: 2025-11-05 15:03:59.263 [INFO][5170] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.32.6/26] block=192.168.32.0/26 handle="k8s-pod-network.31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f" host="ip-172-31-21-83" Nov 5 15:03:59.353554 containerd[1976]: 2025-11-05 15:03:59.263 [INFO][5170] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.6/26] handle="k8s-pod-network.31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f" host="ip-172-31-21-83" Nov 5 15:03:59.353554 containerd[1976]: 2025-11-05 15:03:59.264 [INFO][5170] 
ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:03:59.353554 containerd[1976]: 2025-11-05 15:03:59.265 [INFO][5170] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.6/26] IPv6=[] ContainerID="31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f" HandleID="k8s-pod-network.31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f" Workload="ip--172--31--21--83-k8s-calico--kube--controllers--784689597c--cgw5w-eth0" Nov 5 15:03:59.360545 containerd[1976]: 2025-11-05 15:03:59.272 [INFO][5147] cni-plugin/k8s.go 418: Populated endpoint ContainerID="31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f" Namespace="calico-system" Pod="calico-kube-controllers-784689597c-cgw5w" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--kube--controllers--784689597c--cgw5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--83-k8s-calico--kube--controllers--784689597c--cgw5w-eth0", GenerateName:"calico-kube-controllers-784689597c-", Namespace:"calico-system", SelfLink:"", UID:"10c8b7bb-d826-4668-8911-b97ba8246d4b", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 3, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"784689597c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-83", ContainerID:"", Pod:"calico-kube-controllers-784689597c-cgw5w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali05e38bf1bc5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:03:59.360545 containerd[1976]: 2025-11-05 15:03:59.278 [INFO][5147] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.6/32] ContainerID="31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f" Namespace="calico-system" Pod="calico-kube-controllers-784689597c-cgw5w" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--kube--controllers--784689597c--cgw5w-eth0" Nov 5 15:03:59.360545 containerd[1976]: 2025-11-05 15:03:59.278 [INFO][5147] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05e38bf1bc5 ContainerID="31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f" Namespace="calico-system" Pod="calico-kube-controllers-784689597c-cgw5w" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--kube--controllers--784689597c--cgw5w-eth0" Nov 5 15:03:59.360545 containerd[1976]: 2025-11-05 15:03:59.292 [INFO][5147] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f" Namespace="calico-system" Pod="calico-kube-controllers-784689597c-cgw5w" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--kube--controllers--784689597c--cgw5w-eth0" Nov 
5 15:03:59.360545 containerd[1976]: 2025-11-05 15:03:59.297 [INFO][5147] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f" Namespace="calico-system" Pod="calico-kube-controllers-784689597c-cgw5w" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--kube--controllers--784689597c--cgw5w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--83-k8s-calico--kube--controllers--784689597c--cgw5w-eth0", GenerateName:"calico-kube-controllers-784689597c-", Namespace:"calico-system", SelfLink:"", UID:"10c8b7bb-d826-4668-8911-b97ba8246d4b", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 3, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"784689597c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-83", ContainerID:"31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f", Pod:"calico-kube-controllers-784689597c-cgw5w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.32.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali05e38bf1bc5", MAC:"d2:33:c9:53:ce:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:03:59.360545 containerd[1976]: 2025-11-05 15:03:59.327 [INFO][5147] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f" Namespace="calico-system" Pod="calico-kube-controllers-784689597c-cgw5w" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--kube--controllers--784689597c--cgw5w-eth0" Nov 5 15:03:59.406944 containerd[1976]: time="2025-11-05T15:03:59.406696835Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:03:59.411542 containerd[1976]: time="2025-11-05T15:03:59.411445055Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:03:59.411701 containerd[1976]: time="2025-11-05T15:03:59.411605279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:03:59.412621 kubelet[3405]: E1105 15:03:59.411859 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 
15:03:59.414840 kubelet[3405]: E1105 15:03:59.413438 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:03:59.414840 kubelet[3405]: E1105 15:03:59.414413 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jg9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bb8dc7b97-bnfj9_calico-apiserver(8ec6fd17-e646-449e-8324-b2210e743bb4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:03:59.416224 kubelet[3405]: E1105 15:03:59.415720 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-bnfj9" podUID="8ec6fd17-e646-449e-8324-b2210e743bb4" Nov 5 15:03:59.458472 containerd[1976]: time="2025-11-05T15:03:59.458283647Z" 
level=info msg="connecting to shim 31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f" address="unix:///run/containerd/s/8c9b189b7e1cdbf780635c303656d7e672f5811f9859c28d3b5758fbc1961936" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:03:59.480503 systemd-networkd[1583]: cali50e67c9c338: Link UP Nov 5 15:03:59.481042 systemd-networkd[1583]: cali50e67c9c338: Gained carrier Nov 5 15:03:59.521614 containerd[1976]: 2025-11-05 15:03:59.071 [INFO][5148] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--83-k8s-csi--node--driver--qgwk8-eth0 csi-node-driver- calico-system ef9a0063-5427-4eaf-b6d6-01cd9334db4b 768 0 2025-11-05 15:03:29 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-21-83 csi-node-driver-qgwk8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali50e67c9c338 [] [] }} ContainerID="ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9" Namespace="calico-system" Pod="csi-node-driver-qgwk8" WorkloadEndpoint="ip--172--31--21--83-k8s-csi--node--driver--qgwk8-" Nov 5 15:03:59.521614 containerd[1976]: 2025-11-05 15:03:59.073 [INFO][5148] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9" Namespace="calico-system" Pod="csi-node-driver-qgwk8" WorkloadEndpoint="ip--172--31--21--83-k8s-csi--node--driver--qgwk8-eth0" Nov 5 15:03:59.521614 containerd[1976]: 2025-11-05 15:03:59.206 [INFO][5175] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9" HandleID="k8s-pod-network.ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9" Workload="ip--172--31--21--83-k8s-csi--node--driver--qgwk8-eth0" Nov 5 15:03:59.521614 containerd[1976]: 2025-11-05 15:03:59.209 [INFO][5175] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9" HandleID="k8s-pod-network.ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9" Workload="ip--172--31--21--83-k8s-csi--node--driver--qgwk8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000224e60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-21-83", "pod":"csi-node-driver-qgwk8", "timestamp":"2025-11-05 15:03:59.206929174 +0000 UTC"}, Hostname:"ip-172-31-21-83", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:03:59.521614 containerd[1976]: 2025-11-05 15:03:59.209 [INFO][5175] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:03:59.521614 containerd[1976]: 2025-11-05 15:03:59.264 [INFO][5175] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:03:59.521614 containerd[1976]: 2025-11-05 15:03:59.264 [INFO][5175] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-83' Nov 5 15:03:59.521614 containerd[1976]: 2025-11-05 15:03:59.307 [INFO][5175] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9" host="ip-172-31-21-83" Nov 5 15:03:59.521614 containerd[1976]: 2025-11-05 15:03:59.342 [INFO][5175] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-83" Nov 5 15:03:59.521614 containerd[1976]: 2025-11-05 15:03:59.367 [INFO][5175] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:03:59.521614 containerd[1976]: 2025-11-05 15:03:59.377 [INFO][5175] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:03:59.521614 containerd[1976]: 2025-11-05 15:03:59.386 [INFO][5175] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:03:59.521614 containerd[1976]: 2025-11-05 15:03:59.387 [INFO][5175] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9" host="ip-172-31-21-83" Nov 5 15:03:59.521614 containerd[1976]: 2025-11-05 15:03:59.390 [INFO][5175] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9 Nov 5 15:03:59.521614 containerd[1976]: 2025-11-05 15:03:59.400 [INFO][5175] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9" host="ip-172-31-21-83" Nov 5 15:03:59.521614 containerd[1976]: 2025-11-05 15:03:59.434 [INFO][5175] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.32.7/26] block=192.168.32.0/26 handle="k8s-pod-network.ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9" host="ip-172-31-21-83" Nov 5 15:03:59.521614 containerd[1976]: 2025-11-05 15:03:59.434 [INFO][5175] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.7/26] handle="k8s-pod-network.ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9" host="ip-172-31-21-83" Nov 5 15:03:59.521614 containerd[1976]: 2025-11-05 15:03:59.434 [INFO][5175] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:03:59.521614 containerd[1976]: 2025-11-05 15:03:59.437 [INFO][5175] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.7/26] IPv6=[] ContainerID="ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9" HandleID="k8s-pod-network.ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9" Workload="ip--172--31--21--83-k8s-csi--node--driver--qgwk8-eth0" Nov 5 15:03:59.524032 containerd[1976]: 2025-11-05 15:03:59.448 [INFO][5148] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9" Namespace="calico-system" Pod="csi-node-driver-qgwk8" WorkloadEndpoint="ip--172--31--21--83-k8s-csi--node--driver--qgwk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--83-k8s-csi--node--driver--qgwk8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef9a0063-5427-4eaf-b6d6-01cd9334db4b", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-83", ContainerID:"", Pod:"csi-node-driver-qgwk8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali50e67c9c338", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:03:59.524032 containerd[1976]: 2025-11-05 15:03:59.450 [INFO][5148] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.7/32] ContainerID="ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9" Namespace="calico-system" Pod="csi-node-driver-qgwk8" WorkloadEndpoint="ip--172--31--21--83-k8s-csi--node--driver--qgwk8-eth0" Nov 5 15:03:59.524032 containerd[1976]: 2025-11-05 15:03:59.451 [INFO][5148] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50e67c9c338 ContainerID="ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9" Namespace="calico-system" Pod="csi-node-driver-qgwk8" WorkloadEndpoint="ip--172--31--21--83-k8s-csi--node--driver--qgwk8-eth0" Nov 5 15:03:59.524032 containerd[1976]: 2025-11-05 15:03:59.482 [INFO][5148] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9" Namespace="calico-system" Pod="csi-node-driver-qgwk8" WorkloadEndpoint="ip--172--31--21--83-k8s-csi--node--driver--qgwk8-eth0" Nov 5 15:03:59.524032 containerd[1976]: 2025-11-05 15:03:59.483 [INFO][5148] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9" Namespace="calico-system" 
Pod="csi-node-driver-qgwk8" WorkloadEndpoint="ip--172--31--21--83-k8s-csi--node--driver--qgwk8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--83-k8s-csi--node--driver--qgwk8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef9a0063-5427-4eaf-b6d6-01cd9334db4b", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 3, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-83", ContainerID:"ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9", Pod:"csi-node-driver-qgwk8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.32.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali50e67c9c338", MAC:"de:51:6a:db:f4:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:03:59.524032 containerd[1976]: 2025-11-05 15:03:59.514 [INFO][5148] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9" Namespace="calico-system" Pod="csi-node-driver-qgwk8" WorkloadEndpoint="ip--172--31--21--83-k8s-csi--node--driver--qgwk8-eth0" Nov 5 15:03:59.637389 containerd[1976]: time="2025-11-05T15:03:59.637249224Z" level=info msg="connecting to shim ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9" address="unix:///run/containerd/s/8aab2153543b162d88cfe544f3b780b9c9929febda21ac5f0bcf1d427148c006" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:03:59.647565 systemd[1]: Started cri-containerd-31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f.scope - libcontainer container 31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f. Nov 5 15:03:59.708443 systemd[1]: Started cri-containerd-ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9.scope - libcontainer container ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9. Nov 5 15:03:59.800538 containerd[1976]: time="2025-11-05T15:03:59.800104009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb8dc7b97-7jq9b,Uid:8d9369e7-33c5-42a2-b295-6e6f5445630e,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:03:59.814487 systemd-networkd[1583]: vxlan.calico: Gained IPv6LL Nov 5 15:03:59.878395 systemd-networkd[1583]: calic382c095121: Gained IPv6LL Nov 5 15:04:00.087635 systemd[1]: Started sshd@7-172.31.21.83:22-139.178.89.65:54910.service - OpenSSH per-connection server daemon (139.178.89.65:54910). 
Nov 5 15:04:00.209243 containerd[1976]: time="2025-11-05T15:04:00.208712519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-qgwk8,Uid:ef9a0063-5427-4eaf-b6d6-01cd9334db4b,Namespace:calico-system,Attempt:0,} returns sandbox id \"ac8710d39712ccd079ca5d12aea4cec24448701b6ff24eb0b5e917a16a4712c9\"" Nov 5 15:04:00.228689 containerd[1976]: time="2025-11-05T15:04:00.228467615Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:04:00.239308 containerd[1976]: time="2025-11-05T15:04:00.237036647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-784689597c-cgw5w,Uid:10c8b7bb-d826-4668-8911-b97ba8246d4b,Namespace:calico-system,Attempt:0,} returns sandbox id \"31d48efdd818be40955d8f5fc99585563acb69c61d6824be485febe937f8691f\"" Nov 5 15:04:00.374754 systemd-networkd[1583]: cali9a4b2679e12: Link UP Nov 5 15:04:00.379973 systemd-networkd[1583]: cali9a4b2679e12: Gained carrier Nov 5 15:04:00.395004 sshd[5321]: Accepted publickey for core from 139.178.89.65 port 54910 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:04:00.409986 kubelet[3405]: E1105 15:04:00.409777 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-bnfj9" podUID="8ec6fd17-e646-449e-8324-b2210e743bb4" Nov 5 15:04:00.412510 sshd-session[5321]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:04:00.435296 systemd-logind[1959]: New session 8 of user core. Nov 5 15:04:00.441291 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 5 15:04:00.457914 containerd[1976]: 2025-11-05 15:04:00.004 [INFO][5298] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--7jq9b-eth0 calico-apiserver-7bb8dc7b97- calico-apiserver 8d9369e7-33c5-42a2-b295-6e6f5445630e 878 0 2025-11-05 15:03:13 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7bb8dc7b97 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-21-83 calico-apiserver-7bb8dc7b97-7jq9b eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9a4b2679e12 [] [] }} ContainerID="154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d" Namespace="calico-apiserver" Pod="calico-apiserver-7bb8dc7b97-7jq9b" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--7jq9b-" Nov 5 15:04:00.457914 containerd[1976]: 2025-11-05 15:04:00.005 [INFO][5298] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d" Namespace="calico-apiserver" Pod="calico-apiserver-7bb8dc7b97-7jq9b" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--7jq9b-eth0" Nov 5 15:04:00.457914 containerd[1976]: 2025-11-05 15:04:00.236 [INFO][5313] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d" HandleID="k8s-pod-network.154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d" Workload="ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--7jq9b-eth0" Nov 5 15:04:00.457914 containerd[1976]: 2025-11-05 15:04:00.243 [INFO][5313] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d" HandleID="k8s-pod-network.154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d" Workload="ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--7jq9b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400038f7b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-21-83", "pod":"calico-apiserver-7bb8dc7b97-7jq9b", "timestamp":"2025-11-05 15:04:00.236750603 +0000 UTC"}, Hostname:"ip-172-31-21-83", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:04:00.457914 containerd[1976]: 2025-11-05 15:04:00.243 [INFO][5313] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:04:00.457914 containerd[1976]: 2025-11-05 15:04:00.246 [INFO][5313] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:04:00.457914 containerd[1976]: 2025-11-05 15:04:00.246 [INFO][5313] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-21-83' Nov 5 15:04:00.457914 containerd[1976]: 2025-11-05 15:04:00.264 [INFO][5313] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d" host="ip-172-31-21-83" Nov 5 15:04:00.457914 containerd[1976]: 2025-11-05 15:04:00.276 [INFO][5313] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-21-83" Nov 5 15:04:00.457914 containerd[1976]: 2025-11-05 15:04:00.289 [INFO][5313] ipam/ipam.go 511: Trying affinity for 192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:04:00.457914 containerd[1976]: 2025-11-05 15:04:00.297 [INFO][5313] ipam/ipam.go 158: Attempting to load block cidr=192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:04:00.457914 containerd[1976]: 2025-11-05 15:04:00.305 [INFO][5313] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.32.0/26 host="ip-172-31-21-83" Nov 5 15:04:00.457914 containerd[1976]: 2025-11-05 15:04:00.306 [INFO][5313] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.32.0/26 handle="k8s-pod-network.154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d" host="ip-172-31-21-83" Nov 5 15:04:00.457914 containerd[1976]: 2025-11-05 15:04:00.311 [INFO][5313] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d Nov 5 15:04:00.457914 containerd[1976]: 2025-11-05 15:04:00.318 [INFO][5313] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.32.0/26 handle="k8s-pod-network.154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d" host="ip-172-31-21-83" Nov 5 15:04:00.457914 containerd[1976]: 2025-11-05 15:04:00.341 [INFO][5313] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.32.8/26] block=192.168.32.0/26 handle="k8s-pod-network.154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d" host="ip-172-31-21-83" Nov 5 15:04:00.457914 containerd[1976]: 2025-11-05 15:04:00.341 [INFO][5313] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.32.8/26] handle="k8s-pod-network.154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d" host="ip-172-31-21-83" Nov 5 15:04:00.457914 containerd[1976]: 2025-11-05 15:04:00.343 [INFO][5313] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:04:00.457914 containerd[1976]: 2025-11-05 15:04:00.343 [INFO][5313] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.32.8/26] IPv6=[] ContainerID="154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d" HandleID="k8s-pod-network.154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d" Workload="ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--7jq9b-eth0" Nov 5 15:04:00.463771 containerd[1976]: 2025-11-05 15:04:00.347 [INFO][5298] cni-plugin/k8s.go 418: Populated endpoint ContainerID="154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d" Namespace="calico-apiserver" Pod="calico-apiserver-7bb8dc7b97-7jq9b" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--7jq9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--7jq9b-eth0", GenerateName:"calico-apiserver-7bb8dc7b97-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d9369e7-33c5-42a2-b295-6e6f5445630e", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb8dc7b97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-83", ContainerID:"", Pod:"calico-apiserver-7bb8dc7b97-7jq9b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9a4b2679e12", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:04:00.463771 containerd[1976]: 2025-11-05 15:04:00.348 [INFO][5298] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.32.8/32] ContainerID="154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d" Namespace="calico-apiserver" Pod="calico-apiserver-7bb8dc7b97-7jq9b" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--7jq9b-eth0" Nov 5 15:04:00.463771 containerd[1976]: 2025-11-05 15:04:00.348 [INFO][5298] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9a4b2679e12 ContainerID="154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d" Namespace="calico-apiserver" Pod="calico-apiserver-7bb8dc7b97-7jq9b" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--7jq9b-eth0" Nov 5 15:04:00.463771 containerd[1976]: 2025-11-05 15:04:00.379 [INFO][5298] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d" Namespace="calico-apiserver" Pod="calico-apiserver-7bb8dc7b97-7jq9b" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--7jq9b-eth0" Nov 5 15:04:00.463771 containerd[1976]: 2025-11-05 15:04:00.380 [INFO][5298] cni-plugin/k8s.go 446: Added Mac, interface name, and active 
container ID to endpoint ContainerID="154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d" Namespace="calico-apiserver" Pod="calico-apiserver-7bb8dc7b97-7jq9b" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--7jq9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--7jq9b-eth0", GenerateName:"calico-apiserver-7bb8dc7b97-", Namespace:"calico-apiserver", SelfLink:"", UID:"8d9369e7-33c5-42a2-b295-6e6f5445630e", ResourceVersion:"878", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 3, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7bb8dc7b97", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-21-83", ContainerID:"154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d", Pod:"calico-apiserver-7bb8dc7b97-7jq9b", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.32.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9a4b2679e12", MAC:"ee:3f:6a:3a:fa:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:04:00.463771 containerd[1976]: 2025-11-05 15:04:00.431 [INFO][5298] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d" Namespace="calico-apiserver" Pod="calico-apiserver-7bb8dc7b97-7jq9b" WorkloadEndpoint="ip--172--31--21--83-k8s-calico--apiserver--7bb8dc7b97--7jq9b-eth0" Nov 5 15:04:00.565863 containerd[1976]: time="2025-11-05T15:04:00.564798205Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:04:00.570182 containerd[1976]: time="2025-11-05T15:04:00.568794853Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:04:00.570182 containerd[1976]: time="2025-11-05T15:04:00.568916725Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:04:00.570411 kubelet[3405]: E1105 15:04:00.569984 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:04:00.570411 kubelet[3405]: E1105 15:04:00.570315 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:04:00.571692 kubelet[3405]: E1105 15:04:00.570746 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7pv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qgwk8_calico-system(ef9a0063-5427-4eaf-b6d6-01cd9334db4b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:04:00.576199 containerd[1976]: time="2025-11-05T15:04:00.573392761Z" level=info msg="connecting to shim 154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d" address="unix:///run/containerd/s/e655e86456ffae2089942682ffaad3bc21899f7479f36fe70c159580b237ad0c" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:04:00.577597 containerd[1976]: time="2025-11-05T15:04:00.577404757Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:04:00.699631 systemd[1]: Started cri-containerd-154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d.scope - libcontainer container 154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d. 
Nov 5 15:04:00.774452 systemd-networkd[1583]: cali05e38bf1bc5: Gained IPv6LL Nov 5 15:04:00.855625 containerd[1976]: time="2025-11-05T15:04:00.855547682Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7bb8dc7b97-7jq9b,Uid:8d9369e7-33c5-42a2-b295-6e6f5445630e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"154b56f97e4be655347f10da868b944022136d72dd5596fd0f7a7369f6cacc5d\"" Nov 5 15:04:00.910603 sshd[5360]: Connection closed by 139.178.89.65 port 54910 Nov 5 15:04:00.910462 sshd-session[5321]: pam_unix(sshd:session): session closed for user core Nov 5 15:04:00.917021 containerd[1976]: time="2025-11-05T15:04:00.916936251Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:04:00.920964 containerd[1976]: time="2025-11-05T15:04:00.920471811Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:04:00.921962 containerd[1976]: time="2025-11-05T15:04:00.920528691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:04:00.923982 kubelet[3405]: E1105 15:04:00.922546 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:04:00.923982 kubelet[3405]: E1105 15:04:00.922656 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:04:00.922706 systemd-logind[1959]: Session 8 logged out. Waiting for processes to exit. Nov 5 15:04:00.924648 systemd[1]: sshd@7-172.31.21.83:22-139.178.89.65:54910.service: Deactivated successfully. 
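Each "fetch failed after status: 404 Not Found" above is the registry's own answer: ghcr.io serves no v3.30.4 manifest for these flatcar/calico repositories. A small, hypothetical diagnostic that reproduces the same NotFound condition against the standard OCI distribution endpoints (token URL and media type are assumptions based on the usual GHCR token flow, not taken from this log):

# Hypothetical check: does ghcr.io have a manifest for repository:tag?
# A 404 here is the condition containerd reports above as "not found".
import json
import urllib.error
import urllib.request

def tag_exists(repository: str, tag: str) -> bool:
    token_url = ("https://ghcr.io/token?service=ghcr.io"
                 f"&scope=repository:{repository}:pull")
    with urllib.request.urlopen(token_url) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repository}/manifests/{tag}", method="HEAD")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", "application/vnd.oci.image.index.v1+json")
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

print(tag_exists("flatcar/calico/kube-controllers", "v3.30.4"))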
Nov 5 15:04:00.925974 containerd[1976]: time="2025-11-05T15:04:00.925117203Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:04:00.926603 kubelet[3405]: E1105 15:04:00.926434 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqssh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-784689597c-cgw5w_calico-system(10c8b7bb-d826-4668-8911-b97ba8246d4b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:04:00.930696 kubelet[3405]: E1105 15:04:00.930603 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-784689597c-cgw5w" podUID="10c8b7bb-d826-4668-8911-b97ba8246d4b" Nov 5 15:04:00.932347 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 15:04:00.940114 systemd-logind[1959]: Removed session 8. Nov 5 15:04:01.216362 containerd[1976]: time="2025-11-05T15:04:01.216296700Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:04:01.218814 containerd[1976]: time="2025-11-05T15:04:01.218710848Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:04:01.219076 containerd[1976]: time="2025-11-05T15:04:01.218741568Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:04:01.219311 kubelet[3405]: E1105 15:04:01.219145 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:04:01.219311 kubelet[3405]: E1105 15:04:01.219243 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:04:01.219772 containerd[1976]: time="2025-11-05T15:04:01.219709968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:04:01.220101 kubelet[3405]: E1105 15:04:01.219523 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7pv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qgwk8_calico-system(ef9a0063-5427-4eaf-b6d6-01cd9334db4b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:04:01.221754 kubelet[3405]: E1105 15:04:01.221642 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qgwk8" podUID="ef9a0063-5427-4eaf-b6d6-01cd9334db4b" Nov 5 15:04:01.402024 kubelet[3405]: E1105 15:04:01.401958 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found\"" pod="calico-system/calico-kube-controllers-784689597c-cgw5w" podUID="10c8b7bb-d826-4668-8911-b97ba8246d4b" Nov 5 15:04:01.403489 kubelet[3405]: E1105 15:04:01.403263 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qgwk8" podUID="ef9a0063-5427-4eaf-b6d6-01cd9334db4b" Nov 5 15:04:01.478782 systemd-networkd[1583]: cali50e67c9c338: Gained IPv6LL Nov 5 15:04:01.537351 containerd[1976]: time="2025-11-05T15:04:01.537118166Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:04:01.539459 containerd[1976]: time="2025-11-05T15:04:01.539307734Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:04:01.539459 containerd[1976]: time="2025-11-05T15:04:01.539337074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:04:01.539746 kubelet[3405]: E1105 15:04:01.539667 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:04:01.539836 kubelet[3405]: E1105 15:04:01.539744 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:04:01.540038 kubelet[3405]: E1105 15:04:01.539925 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-llq2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bb8dc7b97-7jq9b_calico-apiserver(8d9369e7-33c5-42a2-b295-6e6f5445630e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:04:01.541429 kubelet[3405]: E1105 15:04:01.541351 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-7jq9b" podUID="8d9369e7-33c5-42a2-b295-6e6f5445630e" Nov 5 15:04:02.118587 systemd-networkd[1583]: cali9a4b2679e12: Gained IPv6LL Nov 5 15:04:02.405537 kubelet[3405]: E1105 15:04:02.403840 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-7jq9b" podUID="8d9369e7-33c5-42a2-b295-6e6f5445630e" Nov 5 15:04:04.590094 ntpd[1950]: Listen normally on 6 
vxlan.calico 192.168.32.0:123 Nov 5 15:04:04.590704 ntpd[1950]: 5 Nov 15:04:04 ntpd[1950]: Listen normally on 6 vxlan.calico 192.168.32.0:123 Nov 5 15:04:04.590704 ntpd[1950]: 5 Nov 15:04:04 ntpd[1950]: Listen normally on 7 cali31930132c33 [fe80::ecee:eeff:feee:eeee%4]:123 Nov 5 15:04:04.590704 ntpd[1950]: 5 Nov 15:04:04 ntpd[1950]: Listen normally on 8 cali08c094394f1 [fe80::ecee:eeff:feee:eeee%5]:123 Nov 5 15:04:04.590704 ntpd[1950]: 5 Nov 15:04:04 ntpd[1950]: Listen normally on 9 cali8fb6dd37f39 [fe80::ecee:eeff:feee:eeee%6]:123 Nov 5 15:04:04.590704 ntpd[1950]: 5 Nov 15:04:04 ntpd[1950]: Listen normally on 10 cali659955965e5 [fe80::ecee:eeff:feee:eeee%7]:123 Nov 5 15:04:04.590704 ntpd[1950]: 5 Nov 15:04:04 ntpd[1950]: Listen normally on 11 vxlan.calico [fe80::64f8:cff:fe72:76fd%8]:123 Nov 5 15:04:04.590704 ntpd[1950]: 5 Nov 15:04:04 ntpd[1950]: Listen normally on 12 calic382c095121 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 5 15:04:04.590704 ntpd[1950]: 5 Nov 15:04:04 ntpd[1950]: Listen normally on 13 cali05e38bf1bc5 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 5 15:04:04.590704 ntpd[1950]: 5 Nov 15:04:04 ntpd[1950]: Listen normally on 14 cali50e67c9c338 [fe80::ecee:eeff:feee:eeee%13]:123 Nov 5 15:04:04.590704 ntpd[1950]: 5 Nov 15:04:04 ntpd[1950]: Listen normally on 15 cali9a4b2679e12 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 5 15:04:04.590224 ntpd[1950]: Listen normally on 7 cali31930132c33 [fe80::ecee:eeff:feee:eeee%4]:123 Nov 5 15:04:04.590325 ntpd[1950]: Listen normally on 8 cali08c094394f1 [fe80::ecee:eeff:feee:eeee%5]:123 Nov 5 15:04:04.590376 ntpd[1950]: Listen normally on 9 cali8fb6dd37f39 [fe80::ecee:eeff:feee:eeee%6]:123 Nov 5 15:04:04.590422 ntpd[1950]: Listen normally on 10 cali659955965e5 [fe80::ecee:eeff:feee:eeee%7]:123 Nov 5 15:04:04.590468 ntpd[1950]: Listen normally on 11 vxlan.calico [fe80::64f8:cff:fe72:76fd%8]:123 Nov 5 15:04:04.590513 ntpd[1950]: Listen normally on 12 calic382c095121 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 5 15:04:04.590557 ntpd[1950]: Listen normally on 13 cali05e38bf1bc5 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 5 15:04:04.590600 ntpd[1950]: Listen normally on 14 cali50e67c9c338 [fe80::ecee:eeff:feee:eeee%13]:123 Nov 5 15:04:04.590645 ntpd[1950]: Listen normally on 15 cali9a4b2679e12 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 5 15:04:05.948008 systemd[1]: Started sshd@8-172.31.21.83:22-139.178.89.65:54922.service - OpenSSH per-connection server daemon (139.178.89.65:54922). Nov 5 15:04:06.152208 sshd[5440]: Accepted publickey for core from 139.178.89.65 port 54922 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:04:06.154641 sshd-session[5440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:04:06.165291 systemd-logind[1959]: New session 9 of user core. Nov 5 15:04:06.170475 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 5 15:04:06.438004 sshd[5443]: Connection closed by 139.178.89.65 port 54922 Nov 5 15:04:06.439291 sshd-session[5440]: pam_unix(sshd:session): session closed for user core Nov 5 15:04:06.446685 systemd-logind[1959]: Session 9 logged out. Waiting for processes to exit. Nov 5 15:04:06.448003 systemd[1]: sshd@8-172.31.21.83:22-139.178.89.65:54922.service: Deactivated successfully. Nov 5 15:04:06.453009 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 15:04:06.459347 systemd-logind[1959]: Removed session 9. 
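The alternation in the surrounding entries between fresh ErrImagePull attempts and ImagePullBackOff waits reflects the kubelet's per-image doubling backoff. A rough sketch of that schedule, assuming the commonly cited defaults of a 10 s base and 300 s cap (assumed values, not read from this log):

# Rough sketch of the doubling image-pull backoff behind the
# ErrImagePull / ImagePullBackOff alternation. Base and cap are assumptions.
def pull_backoff(base: float = 10.0, cap: float = 300.0, attempts: int = 8):
    delay, schedule = base, []
    for _ in range(attempts):
        schedule.append(min(delay, cap))
        delay *= 2
    return schedule

print(pull_backoff())  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]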
Nov 5 15:04:08.801499 containerd[1976]: time="2025-11-05T15:04:08.801130762Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:04:09.821745 containerd[1976]: time="2025-11-05T15:04:09.821663087Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:04:09.822914 containerd[1976]: time="2025-11-05T15:04:09.822838211Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:04:09.823036 containerd[1976]: time="2025-11-05T15:04:09.823000019Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:04:09.823390 kubelet[3405]: E1105 15:04:09.823315 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:04:09.824685 kubelet[3405]: E1105 15:04:09.823391 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:04:09.824685 kubelet[3405]: E1105 15:04:09.823838 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a6788880d9504b6b9eeaa6b75dbe9332,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fw6st,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5895d64bd-fm899_calico-system(c36dc63e-c060-4c77-b41d-1b4d1b676e6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:04:09.826332 containerd[1976]: time="2025-11-05T15:04:09.823853795Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:04:10.148426 containerd[1976]: time="2025-11-05T15:04:10.148352408Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:04:10.149485 containerd[1976]: time="2025-11-05T15:04:10.149417060Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:04:10.150195 containerd[1976]: time="2025-11-05T15:04:10.149454992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:04:10.150346 kubelet[3405]: E1105 15:04:10.149740 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:04:10.150346 kubelet[3405]: E1105 15:04:10.149802 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:04:10.150666 kubelet[3405]: E1105 15:04:10.150473 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvjfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jcc47_calico-system(7ef28263-0ce9-4955-869b-6ae38808f23b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:04:10.151471 containerd[1976]: time="2025-11-05T15:04:10.151421156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:04:10.152643 kubelet[3405]: E1105 15:04:10.152567 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jcc47" podUID="7ef28263-0ce9-4955-869b-6ae38808f23b" Nov 5 15:04:10.467728 containerd[1976]: time="2025-11-05T15:04:10.467475058Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:04:10.468865 containerd[1976]: time="2025-11-05T15:04:10.468733558Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:04:10.468865 containerd[1976]: time="2025-11-05T15:04:10.468816694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:04:10.469712 kubelet[3405]: E1105 15:04:10.469482 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:04:10.469712 kubelet[3405]: E1105 15:04:10.469580 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:04:10.470296 kubelet[3405]: E1105 15:04:10.469999 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw6st,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5895d64bd-fm899_calico-system(c36dc63e-c060-4c77-b41d-1b4d1b676e6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:04:10.471524 kubelet[3405]: E1105 15:04:10.471441 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5895d64bd-fm899" podUID="c36dc63e-c060-4c77-b41d-1b4d1b676e6a" Nov 5 15:04:11.474705 systemd[1]: Started sshd@9-172.31.21.83:22-139.178.89.65:38278.service - OpenSSH per-connection server 
daemon (139.178.89.65:38278). Nov 5 15:04:11.671099 sshd[5465]: Accepted publickey for core from 139.178.89.65 port 38278 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:04:11.673569 sshd-session[5465]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:04:11.681850 systemd-logind[1959]: New session 10 of user core. Nov 5 15:04:11.688457 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 15:04:11.966397 sshd[5468]: Connection closed by 139.178.89.65 port 38278 Nov 5 15:04:11.967577 sshd-session[5465]: pam_unix(sshd:session): session closed for user core Nov 5 15:04:11.976083 systemd[1]: sshd@9-172.31.21.83:22-139.178.89.65:38278.service: Deactivated successfully. Nov 5 15:04:11.981297 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 15:04:11.985993 systemd-logind[1959]: Session 10 logged out. Waiting for processes to exit. Nov 5 15:04:12.004571 systemd[1]: Started sshd@10-172.31.21.83:22-139.178.89.65:38284.service - OpenSSH per-connection server daemon (139.178.89.65:38284). Nov 5 15:04:12.008120 systemd-logind[1959]: Removed session 10. Nov 5 15:04:12.195716 sshd[5481]: Accepted publickey for core from 139.178.89.65 port 38284 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:04:12.198086 sshd-session[5481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:04:12.207294 systemd-logind[1959]: New session 11 of user core. Nov 5 15:04:12.228443 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 15:04:12.565423 sshd[5484]: Connection closed by 139.178.89.65 port 38284 Nov 5 15:04:12.566373 sshd-session[5481]: pam_unix(sshd:session): session closed for user core Nov 5 15:04:12.581448 systemd[1]: sshd@10-172.31.21.83:22-139.178.89.65:38284.service: Deactivated successfully. Nov 5 15:04:12.589616 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 15:04:12.593375 systemd-logind[1959]: Session 11 logged out. Waiting for processes to exit. Nov 5 15:04:12.621092 systemd[1]: Started sshd@11-172.31.21.83:22-139.178.89.65:38298.service - OpenSSH per-connection server daemon (139.178.89.65:38298). Nov 5 15:04:12.626605 systemd-logind[1959]: Removed session 11. Nov 5 15:04:12.818860 sshd[5493]: Accepted publickey for core from 139.178.89.65 port 38298 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:04:12.819834 sshd-session[5493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:04:12.828858 systemd-logind[1959]: New session 12 of user core. Nov 5 15:04:12.834427 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 15:04:13.131333 sshd[5496]: Connection closed by 139.178.89.65 port 38298 Nov 5 15:04:13.130876 sshd-session[5493]: pam_unix(sshd:session): session closed for user core Nov 5 15:04:13.139553 systemd[1]: sshd@11-172.31.21.83:22-139.178.89.65:38298.service: Deactivated successfully. Nov 5 15:04:13.143268 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 15:04:13.148272 systemd-logind[1959]: Session 12 logged out. Waiting for processes to exit. Nov 5 15:04:13.151661 systemd-logind[1959]: Removed session 12. 
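Because the same pull failures repeat for every Calico image, a plain-text export of this journal is easiest to read tallied per image reference. A small, hypothetical helper for that; the regex is derived from the containerd error lines above, and "node.log" is a placeholder file name.

# Hypothetical helper: count "PullImage ... failed" errors per image in a
# plain-text journal export like this one.
import re
from collections import Counter

PULL_FAIL = re.compile(r'PullImage \\"(?P<image>[^"\\]+)\\" failed')

def count_pull_failures(lines):
    counts = Counter()
    for line in lines:
        match = PULL_FAIL.search(line)
        if match:
            counts[match.group("image")] += 1
    return counts

with open("node.log", encoding="utf-8") as fh:
    for image, hits in count_pull_failures(fh).most_common():
        print(f"{hits:3d}  {image}")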
Nov 5 15:04:13.807862 containerd[1976]: time="2025-11-05T15:04:13.807698859Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:04:14.113595 containerd[1976]: time="2025-11-05T15:04:14.113287176Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:04:14.115585 containerd[1976]: time="2025-11-05T15:04:14.115512240Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:04:14.115692 containerd[1976]: time="2025-11-05T15:04:14.115633956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:04:14.115919 kubelet[3405]: E1105 15:04:14.115859 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:04:14.116786 kubelet[3405]: E1105 15:04:14.115928 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:04:14.118254 containerd[1976]: time="2025-11-05T15:04:14.117266988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:04:14.118561 kubelet[3405]: E1105 15:04:14.117444 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7pv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qgwk8_calico-system(ef9a0063-5427-4eaf-b6d6-01cd9334db4b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:04:14.456589 containerd[1976]: time="2025-11-05T15:04:14.456388370Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:04:14.458800 containerd[1976]: time="2025-11-05T15:04:14.458658458Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:04:14.458800 containerd[1976]: time="2025-11-05T15:04:14.458737286Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:04:14.459275 kubelet[3405]: E1105 15:04:14.459197 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:04:14.459275 kubelet[3405]: E1105 15:04:14.459267 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:04:14.461124 containerd[1976]: time="2025-11-05T15:04:14.460324154Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:04:14.461272 kubelet[3405]: E1105 15:04:14.460367 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jg9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bb8dc7b97-bnfj9_calico-apiserver(8ec6fd17-e646-449e-8324-b2210e743bb4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:04:14.462173 kubelet[3405]: E1105 15:04:14.462083 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-bnfj9" podUID="8ec6fd17-e646-449e-8324-b2210e743bb4" Nov 5 15:04:14.812571 containerd[1976]: time="2025-11-05T15:04:14.812380696Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:04:14.815502 containerd[1976]: time="2025-11-05T15:04:14.815341240Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:04:14.815502 containerd[1976]: time="2025-11-05T15:04:14.815382268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:04:14.816051 kubelet[3405]: E1105 15:04:14.815692 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:04:14.816051 kubelet[3405]: E1105 15:04:14.815760 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:04:14.816845 kubelet[3405]: E1105 15:04:14.816747 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7pv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qgwk8_calico-system(ef9a0063-5427-4eaf-b6d6-01cd9334db4b): ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:04:14.817551 containerd[1976]: time="2025-11-05T15:04:14.817334632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:04:14.818408 kubelet[3405]: E1105 15:04:14.818304 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qgwk8" podUID="ef9a0063-5427-4eaf-b6d6-01cd9334db4b" Nov 5 15:04:15.115578 containerd[1976]: time="2025-11-05T15:04:15.115356937Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:04:15.117693 containerd[1976]: time="2025-11-05T15:04:15.117612073Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:04:15.117693 containerd[1976]: time="2025-11-05T15:04:15.117645481Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:04:15.117984 kubelet[3405]: E1105 15:04:15.117929 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:04:15.118528 kubelet[3405]: E1105 15:04:15.117989 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:04:15.118528 kubelet[3405]: E1105 15:04:15.118252 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqssh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-784689597c-cgw5w_calico-system(10c8b7bb-d826-4668-8911-b97ba8246d4b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:04:15.120055 kubelet[3405]: E1105 15:04:15.119990 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-784689597c-cgw5w" podUID="10c8b7bb-d826-4668-8911-b97ba8246d4b" Nov 5 15:04:16.801492 containerd[1976]: time="2025-11-05T15:04:16.801314117Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:04:17.380748 containerd[1976]: time="2025-11-05T15:04:17.380628436Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:04:17.383066 containerd[1976]: time="2025-11-05T15:04:17.383000740Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:04:17.383490 containerd[1976]: time="2025-11-05T15:04:17.383126452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:04:17.384352 kubelet[3405]: E1105 15:04:17.383345 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:04:17.384352 kubelet[3405]: E1105 15:04:17.383404 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:04:17.384352 kubelet[3405]: E1105 15:04:17.383586 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-llq2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bb8dc7b97-7jq9b_calico-apiserver(8d9369e7-33c5-42a2-b295-6e6f5445630e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:04:17.385362 kubelet[3405]: E1105 15:04:17.385301 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-7jq9b" podUID="8d9369e7-33c5-42a2-b295-6e6f5445630e" Nov 5 15:04:18.171633 systemd[1]: Started sshd@12-172.31.21.83:22-139.178.89.65:36922.service - OpenSSH per-connection server daemon (139.178.89.65:36922). Nov 5 15:04:18.374531 sshd[5508]: Accepted publickey for core from 139.178.89.65 port 36922 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:04:18.377736 sshd-session[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:04:18.387284 systemd-logind[1959]: New session 13 of user core. Nov 5 15:04:18.394436 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 5 15:04:18.659346 sshd[5511]: Connection closed by 139.178.89.65 port 36922 Nov 5 15:04:18.659091 sshd-session[5508]: pam_unix(sshd:session): session closed for user core Nov 5 15:04:18.670055 systemd[1]: sshd@12-172.31.21.83:22-139.178.89.65:36922.service: Deactivated successfully. Nov 5 15:04:18.676416 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 15:04:18.680087 systemd-logind[1959]: Session 13 logged out. Waiting for processes to exit. Nov 5 15:04:18.683067 systemd-logind[1959]: Removed session 13. 
Nov 5 15:04:21.800139 kubelet[3405]: E1105 15:04:21.800056 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jcc47" podUID="7ef28263-0ce9-4955-869b-6ae38808f23b" Nov 5 15:04:23.696438 systemd[1]: Started sshd@13-172.31.21.83:22-139.178.89.65:36932.service - OpenSSH per-connection server daemon (139.178.89.65:36932). Nov 5 15:04:23.807847 kubelet[3405]: E1105 15:04:23.807781 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5895d64bd-fm899" podUID="c36dc63e-c060-4c77-b41d-1b4d1b676e6a" Nov 5 15:04:23.907663 sshd[5534]: Accepted publickey for core from 139.178.89.65 port 36932 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:04:23.910024 sshd-session[5534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:04:23.921070 systemd-logind[1959]: New session 14 of user core. Nov 5 15:04:23.925525 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 15:04:24.187196 sshd[5537]: Connection closed by 139.178.89.65 port 36932 Nov 5 15:04:24.186184 sshd-session[5534]: pam_unix(sshd:session): session closed for user core Nov 5 15:04:24.193498 systemd[1]: sshd@13-172.31.21.83:22-139.178.89.65:36932.service: Deactivated successfully. Nov 5 15:04:24.197924 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 15:04:24.200203 systemd-logind[1959]: Session 14 logged out. Waiting for processes to exit. Nov 5 15:04:24.203453 systemd-logind[1959]: Removed session 14. 
Nov 5 15:04:24.396373 containerd[1976]: time="2025-11-05T15:04:24.396103559Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5e837d75b3d7a845060423813e7bed3562e3fc97b16cf65934ccf4f4470e35cb\" id:\"ff5f90aedf83319146a4fc95cdb5a4147dd4e5fd8d43d2fff7f67e0d626ab964\" pid:5561 exit_status:1 exited_at:{seconds:1762355064 nanos:395609471}" Nov 5 15:04:25.801454 kubelet[3405]: E1105 15:04:25.800957 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-784689597c-cgw5w" podUID="10c8b7bb-d826-4668-8911-b97ba8246d4b" Nov 5 15:04:25.804791 kubelet[3405]: E1105 15:04:25.803884 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-bnfj9" podUID="8ec6fd17-e646-449e-8324-b2210e743bb4" Nov 5 15:04:26.802583 kubelet[3405]: E1105 15:04:26.801976 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qgwk8" podUID="ef9a0063-5427-4eaf-b6d6-01cd9334db4b" Nov 5 15:04:29.230355 systemd[1]: Started sshd@14-172.31.21.83:22-139.178.89.65:49188.service - OpenSSH per-connection server daemon (139.178.89.65:49188). Nov 5 15:04:29.443971 sshd[5575]: Accepted publickey for core from 139.178.89.65 port 49188 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:04:29.447237 sshd-session[5575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:04:29.458871 systemd-logind[1959]: New session 15 of user core. Nov 5 15:04:29.466492 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 5 15:04:29.744761 sshd[5578]: Connection closed by 139.178.89.65 port 49188 Nov 5 15:04:29.770346 sshd-session[5575]: pam_unix(sshd:session): session closed for user core Nov 5 15:04:29.778135 systemd[1]: sshd@14-172.31.21.83:22-139.178.89.65:49188.service: Deactivated successfully. Nov 5 15:04:29.783138 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 15:04:29.786957 systemd-logind[1959]: Session 15 logged out. Waiting for processes to exit. Nov 5 15:04:29.791649 systemd-logind[1959]: Removed session 15. Nov 5 15:04:31.801691 kubelet[3405]: E1105 15:04:31.801485 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-7jq9b" podUID="8d9369e7-33c5-42a2-b295-6e6f5445630e" Nov 5 15:04:34.784946 systemd[1]: Started sshd@15-172.31.21.83:22-139.178.89.65:49200.service - OpenSSH per-connection server daemon (139.178.89.65:49200). Nov 5 15:04:34.801045 containerd[1976]: time="2025-11-05T15:04:34.800909963Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:04:35.003181 sshd[5595]: Accepted publickey for core from 139.178.89.65 port 49200 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:04:35.006865 sshd-session[5595]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:04:35.017271 systemd-logind[1959]: New session 16 of user core. Nov 5 15:04:35.022689 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 5 15:04:35.289131 sshd[5598]: Connection closed by 139.178.89.65 port 49200 Nov 5 15:04:35.290200 sshd-session[5595]: pam_unix(sshd:session): session closed for user core Nov 5 15:04:35.298199 systemd-logind[1959]: Session 16 logged out. Waiting for processes to exit. Nov 5 15:04:35.298529 systemd[1]: sshd@15-172.31.21.83:22-139.178.89.65:49200.service: Deactivated successfully. Nov 5 15:04:35.304484 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 15:04:35.309406 systemd-logind[1959]: Removed session 16. Nov 5 15:04:35.327702 systemd[1]: Started sshd@16-172.31.21.83:22-139.178.89.65:49216.service - OpenSSH per-connection server daemon (139.178.89.65:49216). 
Nov 5 15:04:35.355649 containerd[1976]: time="2025-11-05T15:04:35.355377406Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:04:35.359333 containerd[1976]: time="2025-11-05T15:04:35.358270138Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:04:35.360256 containerd[1976]: time="2025-11-05T15:04:35.358359478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:04:35.360360 kubelet[3405]: E1105 15:04:35.359540 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:04:35.360360 kubelet[3405]: E1105 15:04:35.359597 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:04:35.361941 kubelet[3405]: E1105 15:04:35.360187 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvjfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jcc47_calico-system(7ef28263-0ce9-4955-869b-6ae38808f23b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:04:35.361941 kubelet[3405]: E1105 15:04:35.361905 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jcc47" podUID="7ef28263-0ce9-4955-869b-6ae38808f23b" Nov 5 15:04:35.541902 sshd[5610]: Accepted publickey for core from 139.178.89.65 port 49216 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:04:35.544792 sshd-session[5610]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:04:35.554440 systemd-logind[1959]: New session 17 of user core. Nov 5 15:04:35.564505 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 5 15:04:36.800689 containerd[1976]: time="2025-11-05T15:04:36.800328373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:04:36.950872 sshd[5613]: Connection closed by 139.178.89.65 port 49216 Nov 5 15:04:36.952197 sshd-session[5610]: pam_unix(sshd:session): session closed for user core Nov 5 15:04:36.959477 systemd[1]: sshd@16-172.31.21.83:22-139.178.89.65:49216.service: Deactivated successfully. Nov 5 15:04:36.962532 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 15:04:36.966933 systemd-logind[1959]: Session 17 logged out. Waiting for processes to exit. Nov 5 15:04:36.970354 systemd-logind[1959]: Removed session 17. Nov 5 15:04:36.989606 systemd[1]: Started sshd@17-172.31.21.83:22-139.178.89.65:43888.service - OpenSSH per-connection server daemon (139.178.89.65:43888). 
Nov 5 15:04:37.151486 containerd[1976]: time="2025-11-05T15:04:37.151208999Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:04:37.153556 containerd[1976]: time="2025-11-05T15:04:37.153361691Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:04:37.153556 containerd[1976]: time="2025-11-05T15:04:37.153495359Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:04:37.153811 kubelet[3405]: E1105 15:04:37.153705 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:04:37.153811 kubelet[3405]: E1105 15:04:37.153772 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:04:37.155218 kubelet[3405]: E1105 15:04:37.153927 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a6788880d9504b6b9eeaa6b75dbe9332,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fw6st,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5895d64bd-fm899_calico-system(c36dc63e-c060-4c77-b41d-1b4d1b676e6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:04:37.159624 containerd[1976]: time="2025-11-05T15:04:37.159363191Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:04:37.204543 sshd[5623]: Accepted publickey for core from 139.178.89.65 port 43888 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:04:37.207353 sshd-session[5623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:04:37.215901 systemd-logind[1959]: New session 18 of user core. Nov 5 15:04:37.230652 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 5 15:04:37.489614 containerd[1976]: time="2025-11-05T15:04:37.489266208Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:04:37.491680 containerd[1976]: time="2025-11-05T15:04:37.491501904Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:04:37.491904 containerd[1976]: time="2025-11-05T15:04:37.491569068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:04:37.492109 kubelet[3405]: E1105 15:04:37.492051 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:04:37.492229 kubelet[3405]: E1105 15:04:37.492141 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:04:37.492865 kubelet[3405]: E1105 15:04:37.492560 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw6st,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5895d64bd-fm899_calico-system(c36dc63e-c060-4c77-b41d-1b4d1b676e6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:04:37.494341 kubelet[3405]: E1105 15:04:37.494234 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5895d64bd-fm899" podUID="c36dc63e-c060-4c77-b41d-1b4d1b676e6a" Nov 5 15:04:37.804137 containerd[1976]: time="2025-11-05T15:04:37.803476730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:04:38.088723 containerd[1976]: time="2025-11-05T15:04:38.088573751Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:04:38.090899 containerd[1976]: time="2025-11-05T15:04:38.090825095Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:04:38.091038 containerd[1976]: time="2025-11-05T15:04:38.090955559Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:04:38.092217 kubelet[3405]: E1105 15:04:38.091206 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:04:38.092217 kubelet[3405]: E1105 15:04:38.091273 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:04:38.092588 kubelet[3405]: E1105 15:04:38.092437 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jg9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bb8dc7b97-bnfj9_calico-apiserver(8ec6fd17-e646-449e-8324-b2210e743bb4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:04:38.093806 kubelet[3405]: E1105 15:04:38.093733 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-bnfj9" podUID="8ec6fd17-e646-449e-8324-b2210e743bb4" Nov 5 15:04:38.588642 sshd[5626]: Connection closed by 139.178.89.65 port 43888 Nov 5 15:04:38.589462 sshd-session[5623]: pam_unix(sshd:session): session closed for user core Nov 5 15:04:38.601412 systemd[1]: sshd@17-172.31.21.83:22-139.178.89.65:43888.service: Deactivated successfully. Nov 5 15:04:38.608972 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 15:04:38.612456 systemd-logind[1959]: Session 18 logged out. Waiting for processes to exit. Nov 5 15:04:38.641985 systemd[1]: Started sshd@18-172.31.21.83:22-139.178.89.65:43894.service - OpenSSH per-connection server daemon (139.178.89.65:43894). Nov 5 15:04:38.645815 systemd-logind[1959]: Removed session 18. Nov 5 15:04:38.843839 sshd[5658]: Accepted publickey for core from 139.178.89.65 port 43894 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:04:38.845841 sshd-session[5658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:04:38.858666 systemd-logind[1959]: New session 19 of user core. Nov 5 15:04:38.864472 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 15:04:39.458029 sshd[5663]: Connection closed by 139.178.89.65 port 43894 Nov 5 15:04:39.459720 sshd-session[5658]: pam_unix(sshd:session): session closed for user core Nov 5 15:04:39.472937 systemd[1]: sshd@18-172.31.21.83:22-139.178.89.65:43894.service: Deactivated successfully. Nov 5 15:04:39.482027 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 15:04:39.489243 systemd-logind[1959]: Session 19 logged out. Waiting for processes to exit. Nov 5 15:04:39.514133 systemd[1]: Started sshd@19-172.31.21.83:22-139.178.89.65:43902.service - OpenSSH per-connection server daemon (139.178.89.65:43902). Nov 5 15:04:39.516580 systemd-logind[1959]: Removed session 19. Nov 5 15:04:39.710439 sshd[5673]: Accepted publickey for core from 139.178.89.65 port 43902 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:04:39.712930 sshd-session[5673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:04:39.724781 systemd-logind[1959]: New session 20 of user core. Nov 5 15:04:39.734471 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 15:04:39.801100 containerd[1976]: time="2025-11-05T15:04:39.800889208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:04:39.991754 sshd[5676]: Connection closed by 139.178.89.65 port 43902 Nov 5 15:04:39.992461 sshd-session[5673]: pam_unix(sshd:session): session closed for user core Nov 5 15:04:40.000899 systemd[1]: sshd@19-172.31.21.83:22-139.178.89.65:43902.service: Deactivated successfully. Nov 5 15:04:40.006478 systemd[1]: session-20.scope: Deactivated successfully. 
Nov 5 15:04:40.008295 systemd-logind[1959]: Session 20 logged out. Waiting for processes to exit. Nov 5 15:04:40.012974 systemd-logind[1959]: Removed session 20. Nov 5 15:04:40.114399 containerd[1976]: time="2025-11-05T15:04:40.114311137Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:04:40.116640 containerd[1976]: time="2025-11-05T15:04:40.116525449Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:04:40.116640 containerd[1976]: time="2025-11-05T15:04:40.116588209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:04:40.116971 kubelet[3405]: E1105 15:04:40.116798 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:04:40.116971 kubelet[3405]: E1105 15:04:40.116863 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:04:40.117580 kubelet[3405]: E1105 15:04:40.117038 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqssh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-784689597c-cgw5w_calico-system(10c8b7bb-d826-4668-8911-b97ba8246d4b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:04:40.118345 kubelet[3405]: E1105 15:04:40.118286 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-784689597c-cgw5w" podUID="10c8b7bb-d826-4668-8911-b97ba8246d4b" Nov 5 15:04:41.803177 containerd[1976]: time="2025-11-05T15:04:41.802799814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:04:42.133610 containerd[1976]: time="2025-11-05T15:04:42.133526403Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:04:42.135944 containerd[1976]: time="2025-11-05T15:04:42.135828627Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:04:42.136041 containerd[1976]: time="2025-11-05T15:04:42.135912555Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:04:42.137081 kubelet[3405]: E1105 15:04:42.136354 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:04:42.137081 kubelet[3405]: E1105 15:04:42.136431 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:04:42.137081 kubelet[3405]: E1105 15:04:42.136592 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7pv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qgwk8_calico-system(ef9a0063-5427-4eaf-b6d6-01cd9334db4b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:04:42.140972 containerd[1976]: time="2025-11-05T15:04:42.140602719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:04:42.656704 containerd[1976]: time="2025-11-05T15:04:42.656595606Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:04:42.658890 containerd[1976]: time="2025-11-05T15:04:42.658775142Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:04:42.659029 containerd[1976]: time="2025-11-05T15:04:42.658885338Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:04:42.659786 kubelet[3405]: E1105 15:04:42.659318 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:04:42.659786 kubelet[3405]: E1105 15:04:42.659435 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:04:42.659786 kubelet[3405]: E1105 15:04:42.659596 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7pv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qgwk8_calico-system(ef9a0063-5427-4eaf-b6d6-01cd9334db4b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:04:42.661051 kubelet[3405]: E1105 15:04:42.660970 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not 
found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qgwk8" podUID="ef9a0063-5427-4eaf-b6d6-01cd9334db4b" Nov 5 15:04:42.800591 containerd[1976]: time="2025-11-05T15:04:42.800394187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:04:43.190134 containerd[1976]: time="2025-11-05T15:04:43.189926069Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:04:43.192290 containerd[1976]: time="2025-11-05T15:04:43.192208565Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:04:43.192424 containerd[1976]: time="2025-11-05T15:04:43.192341825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:04:43.192715 kubelet[3405]: E1105 15:04:43.192628 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:04:43.192715 kubelet[3405]: E1105 15:04:43.192707 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:04:43.194816 kubelet[3405]: E1105 15:04:43.192890 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-llq2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bb8dc7b97-7jq9b_calico-apiserver(8d9369e7-33c5-42a2-b295-6e6f5445630e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:04:43.194816 kubelet[3405]: E1105 15:04:43.194742 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-7jq9b" podUID="8d9369e7-33c5-42a2-b295-6e6f5445630e" Nov 5 15:04:45.037804 systemd[1]: Started sshd@20-172.31.21.83:22-139.178.89.65:43906.service - OpenSSH per-connection server daemon (139.178.89.65:43906). Nov 5 15:04:45.240269 sshd[5696]: Accepted publickey for core from 139.178.89.65 port 43906 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:04:45.243655 sshd-session[5696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:04:45.253577 systemd-logind[1959]: New session 21 of user core. Nov 5 15:04:45.260453 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 5 15:04:45.510186 sshd[5699]: Connection closed by 139.178.89.65 port 43906 Nov 5 15:04:45.511238 sshd-session[5696]: pam_unix(sshd:session): session closed for user core Nov 5 15:04:45.524447 systemd[1]: sshd@20-172.31.21.83:22-139.178.89.65:43906.service: Deactivated successfully. Nov 5 15:04:45.530730 systemd[1]: session-21.scope: Deactivated successfully. Nov 5 15:04:45.536026 systemd-logind[1959]: Session 21 logged out. Waiting for processes to exit. Nov 5 15:04:45.540333 systemd-logind[1959]: Removed session 21. 
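The flattened &Container{...} dump that kubelet prints with each "Unhandled Error" above is hard to read in journal form. Purely as a reading aid, the following Go sketch re-expresses the logged calico-apiserver container spec with k8s.io/api/core/v1 types; every value is copied from the log entry, while the package name, function name, and the use of client-go style types are illustrative assumptions (this is not the operator's actual code, and fields the log leaves at their defaults are omitted).

    package sketch

    import (
    	corev1 "k8s.io/api/core/v1"
    	"k8s.io/apimachinery/pkg/util/intstr"
    )

    // calicoAPIServerContainer mirrors the Container struct kubelet dumped for
    // pod calico-apiserver-7bb8dc7b97-7jq9b; all values come from the log above.
    func calicoAPIServerContainer() corev1.Container {
    	privileged := false
    	runAsNonRoot := true
    	allowPrivEsc := false
    	uid, gid := int64(10001), int64(10001)
    	return corev1.Container{
    		Name:  "calico-apiserver",
    		Image: "ghcr.io/flatcar/calico/apiserver:v3.30.4", // the reference containerd fails to resolve (404)
    		Args: []string{
    			"--secure-port=5443",
    			"--tls-private-key-file=/calico-apiserver-certs/tls.key",
    			"--tls-cert-file=/calico-apiserver-certs/tls.crt",
    		},
    		Env: []corev1.EnvVar{
    			{Name: "DATASTORE_TYPE", Value: "kubernetes"},
    			{Name: "KUBERNETES_SERVICE_HOST", Value: "10.96.0.1"},
    			{Name: "KUBERNETES_SERVICE_PORT", Value: "443"},
    			{Name: "LOG_LEVEL", Value: "info"},
    			{Name: "MULTI_INTERFACE_MODE", Value: "none"},
    		},
    		VolumeMounts: []corev1.VolumeMount{
    			{Name: "calico-apiserver-certs", ReadOnly: true, MountPath: "/calico-apiserver-certs"},
    			{Name: "kube-api-access-llq2s", ReadOnly: true, MountPath: "/var/run/secrets/kubernetes.io/serviceaccount"},
    		},
    		ReadinessProbe: &corev1.Probe{
    			ProbeHandler: corev1.ProbeHandler{
    				HTTPGet: &corev1.HTTPGetAction{
    					Path:   "/readyz",
    					Port:   intstr.FromInt(5443),
    					Scheme: corev1.URISchemeHTTPS,
    				},
    			},
    			TimeoutSeconds:   5,
    			PeriodSeconds:    60,
    			SuccessThreshold: 1,
    			FailureThreshold: 3,
    		},
    		ImagePullPolicy: corev1.PullIfNotPresent,
    		SecurityContext: &corev1.SecurityContext{
    			Capabilities:             &corev1.Capabilities{Drop: []corev1.Capability{"ALL"}},
    			Privileged:               &privileged,
    			RunAsUser:                &uid,
    			RunAsGroup:               &gid,
    			RunAsNonRoot:             &runAsNonRoot,
    			AllowPrivilegeEscalation: &allowPrivEsc,
    			SeccompProfile:           &corev1.SeccompProfile{Type: corev1.SeccompProfileTypeRuntimeDefault},
    		},
    	}
    }

Because ImagePullPolicy is IfNotPresent and the image never lands in the local content store, kubelet keeps retrying the pull; after the first ErrImagePull it reports ImagePullBackOff between attempts, which matches the entries that follow.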
Nov 5 15:04:49.799628 kubelet[3405]: E1105 15:04:49.799492 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-bnfj9" podUID="8ec6fd17-e646-449e-8324-b2210e743bb4" Nov 5 15:04:50.550834 systemd[1]: Started sshd@21-172.31.21.83:22-139.178.89.65:53736.service - OpenSSH per-connection server daemon (139.178.89.65:53736). Nov 5 15:04:50.780020 sshd[5713]: Accepted publickey for core from 139.178.89.65 port 53736 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:04:50.782567 sshd-session[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:04:50.793563 systemd-logind[1959]: New session 22 of user core. Nov 5 15:04:50.798914 kubelet[3405]: E1105 15:04:50.798798 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jcc47" podUID="7ef28263-0ce9-4955-869b-6ae38808f23b" Nov 5 15:04:50.801750 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 5 15:04:51.069326 sshd[5716]: Connection closed by 139.178.89.65 port 53736 Nov 5 15:04:51.070239 sshd-session[5713]: pam_unix(sshd:session): session closed for user core Nov 5 15:04:51.078794 systemd-logind[1959]: Session 22 logged out. Waiting for processes to exit. Nov 5 15:04:51.080752 systemd[1]: sshd@21-172.31.21.83:22-139.178.89.65:53736.service: Deactivated successfully. Nov 5 15:04:51.085252 systemd[1]: session-22.scope: Deactivated successfully. Nov 5 15:04:51.092116 systemd-logind[1959]: Removed session 22. 
Nov 5 15:04:51.805746 kubelet[3405]: E1105 15:04:51.805342 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5895d64bd-fm899" podUID="c36dc63e-c060-4c77-b41d-1b4d1b676e6a" Nov 5 15:04:52.800171 kubelet[3405]: E1105 15:04:52.800042 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-784689597c-cgw5w" podUID="10c8b7bb-d826-4668-8911-b97ba8246d4b" Nov 5 15:04:54.397340 containerd[1976]: time="2025-11-05T15:04:54.397264444Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5e837d75b3d7a845060423813e7bed3562e3fc97b16cf65934ccf4f4470e35cb\" id:\"e53cfd87af16199617efad982ac752e3af65a6340a29bbe97ab9e29d8ada9b13\" pid:5742 exited_at:{seconds:1762355094 nanos:396756568}" Nov 5 15:04:55.804416 kubelet[3405]: E1105 15:04:55.803823 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qgwk8" podUID="ef9a0063-5427-4eaf-b6d6-01cd9334db4b" Nov 5 15:04:56.109357 systemd[1]: Started sshd@22-172.31.21.83:22-139.178.89.65:40154.service - OpenSSH per-connection server daemon (139.178.89.65:40154). 
Nov 5 15:04:56.310631 sshd[5754]: Accepted publickey for core from 139.178.89.65 port 40154 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:04:56.314509 sshd-session[5754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:04:56.327992 systemd-logind[1959]: New session 23 of user core. Nov 5 15:04:56.335533 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 5 15:04:56.589766 sshd[5757]: Connection closed by 139.178.89.65 port 40154 Nov 5 15:04:56.590660 sshd-session[5754]: pam_unix(sshd:session): session closed for user core Nov 5 15:04:56.599098 systemd[1]: sshd@22-172.31.21.83:22-139.178.89.65:40154.service: Deactivated successfully. Nov 5 15:04:56.604760 systemd[1]: session-23.scope: Deactivated successfully. Nov 5 15:04:56.609832 systemd-logind[1959]: Session 23 logged out. Waiting for processes to exit. Nov 5 15:04:56.614513 systemd-logind[1959]: Removed session 23. Nov 5 15:04:57.809419 kubelet[3405]: E1105 15:04:57.808739 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-7jq9b" podUID="8d9369e7-33c5-42a2-b295-6e6f5445630e" Nov 5 15:05:01.629437 systemd[1]: Started sshd@23-172.31.21.83:22-139.178.89.65:40168.service - OpenSSH per-connection server daemon (139.178.89.65:40168). Nov 5 15:05:01.846132 sshd[5773]: Accepted publickey for core from 139.178.89.65 port 40168 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:05:01.850008 sshd-session[5773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:05:01.862084 systemd-logind[1959]: New session 24 of user core. Nov 5 15:05:01.868813 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 5 15:05:02.209108 sshd[5776]: Connection closed by 139.178.89.65 port 40168 Nov 5 15:05:02.212222 sshd-session[5773]: pam_unix(sshd:session): session closed for user core Nov 5 15:05:02.219698 systemd[1]: sshd@23-172.31.21.83:22-139.178.89.65:40168.service: Deactivated successfully. Nov 5 15:05:02.230600 systemd[1]: session-24.scope: Deactivated successfully. Nov 5 15:05:02.235564 systemd-logind[1959]: Session 24 logged out. Waiting for processes to exit. Nov 5 15:05:02.241518 systemd-logind[1959]: Removed session 24. 
Nov 5 15:05:02.798819 kubelet[3405]: E1105 15:05:02.798742 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-bnfj9" podUID="8ec6fd17-e646-449e-8324-b2210e743bb4" Nov 5 15:05:02.802185 kubelet[3405]: E1105 15:05:02.802019 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jcc47" podUID="7ef28263-0ce9-4955-869b-6ae38808f23b" Nov 5 15:05:03.803975 kubelet[3405]: E1105 15:05:03.803739 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5895d64bd-fm899" podUID="c36dc63e-c060-4c77-b41d-1b4d1b676e6a" Nov 5 15:05:06.807001 kubelet[3405]: E1105 15:05:06.806823 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qgwk8" podUID="ef9a0063-5427-4eaf-b6d6-01cd9334db4b" Nov 5 15:05:07.254728 systemd[1]: Started sshd@24-172.31.21.83:22-139.178.89.65:45046.service - OpenSSH per-connection server daemon (139.178.89.65:45046). 
Nov 5 15:05:07.476296 sshd[5788]: Accepted publickey for core from 139.178.89.65 port 45046 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:05:07.480096 sshd-session[5788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:05:07.498257 systemd-logind[1959]: New session 25 of user core. Nov 5 15:05:07.504545 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 5 15:05:07.803704 kubelet[3405]: E1105 15:05:07.803123 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-784689597c-cgw5w" podUID="10c8b7bb-d826-4668-8911-b97ba8246d4b" Nov 5 15:05:07.828471 sshd[5791]: Connection closed by 139.178.89.65 port 45046 Nov 5 15:05:07.828959 sshd-session[5788]: pam_unix(sshd:session): session closed for user core Nov 5 15:05:07.844503 systemd[1]: sshd@24-172.31.21.83:22-139.178.89.65:45046.service: Deactivated successfully. Nov 5 15:05:07.844958 systemd-logind[1959]: Session 25 logged out. Waiting for processes to exit. Nov 5 15:05:07.851759 systemd[1]: session-25.scope: Deactivated successfully. Nov 5 15:05:07.861703 systemd-logind[1959]: Removed session 25. Nov 5 15:05:08.801085 kubelet[3405]: E1105 15:05:08.799844 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-7jq9b" podUID="8d9369e7-33c5-42a2-b295-6e6f5445630e" Nov 5 15:05:12.863865 systemd[1]: Started sshd@25-172.31.21.83:22-139.178.89.65:45062.service - OpenSSH per-connection server daemon (139.178.89.65:45062). Nov 5 15:05:13.089125 sshd[5804]: Accepted publickey for core from 139.178.89.65 port 45062 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:05:13.091791 sshd-session[5804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:05:13.103854 systemd-logind[1959]: New session 26 of user core. Nov 5 15:05:13.118053 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 5 15:05:13.440700 sshd[5807]: Connection closed by 139.178.89.65 port 45062 Nov 5 15:05:13.440348 sshd-session[5804]: pam_unix(sshd:session): session closed for user core Nov 5 15:05:13.453215 systemd[1]: sshd@25-172.31.21.83:22-139.178.89.65:45062.service: Deactivated successfully. Nov 5 15:05:13.454351 systemd-logind[1959]: Session 26 logged out. Waiting for processes to exit. Nov 5 15:05:13.464095 systemd[1]: session-26.scope: Deactivated successfully. Nov 5 15:05:13.478826 systemd-logind[1959]: Removed session 26. 
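Every Calico image in this capture fails the same way: containerd gets a 404 from ghcr.io for ghcr.io/flatcar/calico/<component>:v3.30.4, kubelet records ErrImagePull, and the pod workers then cycle through ImagePullBackOff. When triaging a long journal dump like this one, a throwaway scanner that tallies the distinct failing references is often quicker than scrolling. The Go sketch below is a hypothetical helper written against exactly this log format (it keys off containerd's 'PullImage "..." failed' error lines); it is not a tool that appears in the log, and the input path is simply whatever file the journal was saved to.

    // failedpulls.go: tally failed image pulls in a saved journal text dump.
    package main

    import (
    	"bufio"
    	"fmt"
    	"os"
    	"regexp"
    )

    // Matches containerd's error entry, e.g.
    //   level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed"
    // The backslashes are optional so the pattern also works on an unescaped dump.
    var pullFailed = regexp.MustCompile(`PullImage \\?"([^"\\]+)\\?" failed`)

    func main() {
    	if len(os.Args) != 2 {
    		fmt.Fprintln(os.Stderr, "usage: failedpulls <journal-dump.txt>")
    		os.Exit(1)
    	}
    	f, err := os.Open(os.Args[1])
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer f.Close()

    	counts := map[string]int{}
    	sc := bufio.NewScanner(f)
    	sc.Buffer(make([]byte, 0, 64*1024), 4*1024*1024) // lines in this dump are very long
    	for sc.Scan() {
    		for _, m := range pullFailed.FindAllStringSubmatch(sc.Text(), -1) {
    			counts[m[1]]++
    		}
    	}
    	if err := sc.Err(); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	for ref, n := range counts {
    		fmt.Printf("%4d  %s\n", n, ref)
    	}
    }

Run with "go run failedpulls.go <saved-dump>"; for this capture it would list the apiserver, csi, node-driver-registrar, goldmane, whisker, whisker-backend and kube-controllers references, all under ghcr.io/flatcar/calico and all at v3.30.4.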
Nov 5 15:05:14.798916 kubelet[3405]: E1105 15:05:14.798843 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-bnfj9" podUID="8ec6fd17-e646-449e-8324-b2210e743bb4" Nov 5 15:05:15.804615 containerd[1976]: time="2025-11-05T15:05:15.804568755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:05:16.122790 containerd[1976]: time="2025-11-05T15:05:16.122477856Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:16.124954 containerd[1976]: time="2025-11-05T15:05:16.124739232Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:05:16.124954 containerd[1976]: time="2025-11-05T15:05:16.124891296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:05:16.126248 kubelet[3405]: E1105 15:05:16.125439 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:05:16.126248 kubelet[3405]: E1105 15:05:16.125513 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:05:16.126248 kubelet[3405]: E1105 15:05:16.125745 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jvjfq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-jcc47_calico-system(7ef28263-0ce9-4955-869b-6ae38808f23b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:16.127255 kubelet[3405]: E1105 15:05:16.127038 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jcc47" podUID="7ef28263-0ce9-4955-869b-6ae38808f23b" Nov 5 15:05:17.801179 kubelet[3405]: E1105 
15:05:17.801046 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qgwk8" podUID="ef9a0063-5427-4eaf-b6d6-01cd9334db4b" Nov 5 15:05:18.798776 containerd[1976]: time="2025-11-05T15:05:18.798722909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:05:19.070768 containerd[1976]: time="2025-11-05T15:05:19.070133595Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:19.072492 containerd[1976]: time="2025-11-05T15:05:19.072311859Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:05:19.072492 containerd[1976]: time="2025-11-05T15:05:19.072436959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:05:19.072702 kubelet[3405]: E1105 15:05:19.072640 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:05:19.073243 kubelet[3405]: E1105 15:05:19.072699 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:05:19.073243 kubelet[3405]: E1105 15:05:19.072840 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a6788880d9504b6b9eeaa6b75dbe9332,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fw6st,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5895d64bd-fm899_calico-system(c36dc63e-c060-4c77-b41d-1b4d1b676e6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:19.075384 containerd[1976]: time="2025-11-05T15:05:19.075318651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:05:19.380831 containerd[1976]: time="2025-11-05T15:05:19.380255140Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:19.382661 containerd[1976]: time="2025-11-05T15:05:19.382586188Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:05:19.382793 containerd[1976]: time="2025-11-05T15:05:19.382741384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:05:19.382962 kubelet[3405]: E1105 15:05:19.382910 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:05:19.383045 kubelet[3405]: E1105 15:05:19.382971 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:05:19.383290 kubelet[3405]: E1105 15:05:19.383136 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fw6st,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5895d64bd-fm899_calico-system(c36dc63e-c060-4c77-b41d-1b4d1b676e6a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:19.384575 kubelet[3405]: E1105 15:05:19.384497 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5895d64bd-fm899" podUID="c36dc63e-c060-4c77-b41d-1b4d1b676e6a" Nov 5 15:05:22.798479 kubelet[3405]: E1105 15:05:22.798283 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc 
= failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-7jq9b" podUID="8d9369e7-33c5-42a2-b295-6e6f5445630e" Nov 5 15:05:22.800451 containerd[1976]: time="2025-11-05T15:05:22.800347977Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:05:23.148144 containerd[1976]: time="2025-11-05T15:05:23.148056787Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:23.150303 containerd[1976]: time="2025-11-05T15:05:23.150236599Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:05:23.150475 containerd[1976]: time="2025-11-05T15:05:23.150382771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:05:23.150807 kubelet[3405]: E1105 15:05:23.150712 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:05:23.150956 kubelet[3405]: E1105 15:05:23.150814 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:05:23.151263 kubelet[3405]: E1105 15:05:23.151123 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sqssh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-784689597c-cgw5w_calico-system(10c8b7bb-d826-4668-8911-b97ba8246d4b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:23.152471 kubelet[3405]: E1105 15:05:23.152395 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-784689597c-cgw5w" podUID="10c8b7bb-d826-4668-8911-b97ba8246d4b" Nov 5 15:05:24.371447 containerd[1976]: time="2025-11-05T15:05:24.371279241Z" level=info msg="TaskExit event 
in podsandbox handler container_id:\"5e837d75b3d7a845060423813e7bed3562e3fc97b16cf65934ccf4f4470e35cb\" id:\"c1cd169ab45e8e7d14697783a7b8b7a70fcbd4978e8c158ec323e3298ed5a94c\" pid:5839 exited_at:{seconds:1762355124 nanos:370857573}" Nov 5 15:05:26.834633 kubelet[3405]: E1105 15:05:26.834399 3405 controller.go:195] "Failed to update lease" err="Put \"https://172.31.21.83:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-21-83?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 5 15:05:27.438056 systemd[1]: cri-containerd-be7013d0d444d488e47c8ebcf20fdf616024d2852f775e3f63760215d8fcda74.scope: Deactivated successfully. Nov 5 15:05:27.439525 systemd[1]: cri-containerd-be7013d0d444d488e47c8ebcf20fdf616024d2852f775e3f63760215d8fcda74.scope: Consumed 28.481s CPU time, 96.6M memory peak. Nov 5 15:05:27.444054 containerd[1976]: time="2025-11-05T15:05:27.443970696Z" level=info msg="TaskExit event in podsandbox handler container_id:\"be7013d0d444d488e47c8ebcf20fdf616024d2852f775e3f63760215d8fcda74\" id:\"be7013d0d444d488e47c8ebcf20fdf616024d2852f775e3f63760215d8fcda74\" pid:3726 exit_status:1 exited_at:{seconds:1762355127 nanos:443496408}" Nov 5 15:05:27.445551 containerd[1976]: time="2025-11-05T15:05:27.444122676Z" level=info msg="received exit event container_id:\"be7013d0d444d488e47c8ebcf20fdf616024d2852f775e3f63760215d8fcda74\" id:\"be7013d0d444d488e47c8ebcf20fdf616024d2852f775e3f63760215d8fcda74\" pid:3726 exit_status:1 exited_at:{seconds:1762355127 nanos:443496408}" Nov 5 15:05:27.483900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be7013d0d444d488e47c8ebcf20fdf616024d2852f775e3f63760215d8fcda74-rootfs.mount: Deactivated successfully. Nov 5 15:05:27.703678 kubelet[3405]: I1105 15:05:27.702511 3405 scope.go:117] "RemoveContainer" containerID="be7013d0d444d488e47c8ebcf20fdf616024d2852f775e3f63760215d8fcda74" Nov 5 15:05:27.707140 containerd[1976]: time="2025-11-05T15:05:27.707081606Z" level=info msg="CreateContainer within sandbox \"c48afcc1849f9bc57986eab2aaba1aff02905f227237bb1d7f2489f157b0b58a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 5 15:05:27.729540 containerd[1976]: time="2025-11-05T15:05:27.729213170Z" level=info msg="Container 5267b5f8ff33987a27ea5617f36dbdda6a83a63602aea6d327f014071c097124: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:05:27.757500 containerd[1976]: time="2025-11-05T15:05:27.756541490Z" level=info msg="CreateContainer within sandbox \"c48afcc1849f9bc57986eab2aaba1aff02905f227237bb1d7f2489f157b0b58a\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"5267b5f8ff33987a27ea5617f36dbdda6a83a63602aea6d327f014071c097124\"" Nov 5 15:05:27.758279 containerd[1976]: time="2025-11-05T15:05:27.758214494Z" level=info msg="StartContainer for \"5267b5f8ff33987a27ea5617f36dbdda6a83a63602aea6d327f014071c097124\"" Nov 5 15:05:27.761686 containerd[1976]: time="2025-11-05T15:05:27.761543690Z" level=info msg="connecting to shim 5267b5f8ff33987a27ea5617f36dbdda6a83a63602aea6d327f014071c097124" address="unix:///run/containerd/s/5e16e146cce79b87822bd91974cb906fbc79433c580254e1da6a3b9fc7246508" protocol=ttrpc version=3 Nov 5 15:05:27.804746 containerd[1976]: time="2025-11-05T15:05:27.803688278Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:05:27.811500 systemd[1]: Started cri-containerd-5267b5f8ff33987a27ea5617f36dbdda6a83a63602aea6d327f014071c097124.scope - libcontainer container 
5267b5f8ff33987a27ea5617f36dbdda6a83a63602aea6d327f014071c097124. Nov 5 15:05:27.879283 containerd[1976]: time="2025-11-05T15:05:27.879218282Z" level=info msg="StartContainer for \"5267b5f8ff33987a27ea5617f36dbdda6a83a63602aea6d327f014071c097124\" returns successfully" Nov 5 15:05:28.155458 containerd[1976]: time="2025-11-05T15:05:28.155376816Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:28.157547 containerd[1976]: time="2025-11-05T15:05:28.157482828Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:05:28.157669 containerd[1976]: time="2025-11-05T15:05:28.157623864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:05:28.158029 kubelet[3405]: E1105 15:05:28.157944 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:05:28.158603 kubelet[3405]: E1105 15:05:28.158042 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:05:28.158603 kubelet[3405]: E1105 15:05:28.158436 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4jg9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bb8dc7b97-bnfj9_calico-apiserver(8ec6fd17-e646-449e-8324-b2210e743bb4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:28.159742 kubelet[3405]: E1105 15:05:28.159668 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-bnfj9" podUID="8ec6fd17-e646-449e-8324-b2210e743bb4" Nov 5 15:05:28.314389 systemd[1]: cri-containerd-203a65cae9fa3dfbeb933fc9697226f101f24890861490f6bf77d9862c2f58a1.scope: Deactivated successfully. Nov 5 15:05:28.314960 systemd[1]: cri-containerd-203a65cae9fa3dfbeb933fc9697226f101f24890861490f6bf77d9862c2f58a1.scope: Consumed 6.902s CPU time, 57.8M memory peak, 204K read from disk. Nov 5 15:05:28.318886 containerd[1976]: time="2025-11-05T15:05:28.318805981Z" level=info msg="TaskExit event in podsandbox handler container_id:\"203a65cae9fa3dfbeb933fc9697226f101f24890861490f6bf77d9862c2f58a1\" id:\"203a65cae9fa3dfbeb933fc9697226f101f24890861490f6bf77d9862c2f58a1\" pid:3146 exit_status:1 exited_at:{seconds:1762355128 nanos:318309589}" Nov 5 15:05:28.319048 containerd[1976]: time="2025-11-05T15:05:28.318930229Z" level=info msg="received exit event container_id:\"203a65cae9fa3dfbeb933fc9697226f101f24890861490f6bf77d9862c2f58a1\" id:\"203a65cae9fa3dfbeb933fc9697226f101f24890861490f6bf77d9862c2f58a1\" pid:3146 exit_status:1 exited_at:{seconds:1762355128 nanos:318309589}" Nov 5 15:05:28.364799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-203a65cae9fa3dfbeb933fc9697226f101f24890861490f6bf77d9862c2f58a1-rootfs.mount: Deactivated successfully. 
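The entries just above show the other side of the picture: containers that did start can still exit, as tigera-operator did, and kubelet recreates them through the CRI, which is why the new ContainerMetadata carries Attempt:1 and containerd logs CreateContainer/StartContainer against the existing sandbox; the same is about to happen to kube-controller-manager below. To inspect that state directly on the node one can talk to containerd's CRI endpoint. The sketch below is an assumption-laden example that uses the k8s.io/cri-api client against the default /run/containerd/containerd.sock socket (roughly what "crictl ps" reports); nothing in this log invokes it.

    // criattempts.go: list containers with their CRI attempt counters.
    package main

    import (
    	"context"
    	"fmt"
    	"os"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    	defer cancel()

    	// containerd serves the CRI on its main socket by default.
    	conn, err := grpc.DialContext(ctx, "unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Attempt > 0 means kubelet has already recreated the container at least once,
    	// as with tigera-operator and kube-controller-manager in this capture.
    	for _, c := range resp.Containers {
    		fmt.Printf("%-40s attempt=%d state=%s\n",
    			c.Metadata.Name, c.Metadata.Attempt, c.State)
    	}
    }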
Nov 5 15:05:28.714653 kubelet[3405]: I1105 15:05:28.714290 3405 scope.go:117] "RemoveContainer" containerID="203a65cae9fa3dfbeb933fc9697226f101f24890861490f6bf77d9862c2f58a1"
Nov 5 15:05:28.718694 containerd[1976]: time="2025-11-05T15:05:28.718068867Z" level=info msg="CreateContainer within sandbox \"a35595f4aa5ac34889f309f805418ee80cddbb56147ef0350561d02ac577f151\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Nov 5 15:05:28.739563 containerd[1976]: time="2025-11-05T15:05:28.739506267Z" level=info msg="Container 38cd678577b6e6a27e8afb00b7e2077daa677a1508c797a365ec93c9aedf423a: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:05:28.761480 containerd[1976]: time="2025-11-05T15:05:28.761310375Z" level=info msg="CreateContainer within sandbox \"a35595f4aa5ac34889f309f805418ee80cddbb56147ef0350561d02ac577f151\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"38cd678577b6e6a27e8afb00b7e2077daa677a1508c797a365ec93c9aedf423a\""
Nov 5 15:05:28.763188 containerd[1976]: time="2025-11-05T15:05:28.762347331Z" level=info msg="StartContainer for \"38cd678577b6e6a27e8afb00b7e2077daa677a1508c797a365ec93c9aedf423a\""
Nov 5 15:05:28.764746 containerd[1976]: time="2025-11-05T15:05:28.764700207Z" level=info msg="connecting to shim 38cd678577b6e6a27e8afb00b7e2077daa677a1508c797a365ec93c9aedf423a" address="unix:///run/containerd/s/ecb993a7399f33ea24dc8437ce4633372c5f2df1aa4f13d594b1f90f0267fb28" protocol=ttrpc version=3
Nov 5 15:05:28.811468 systemd[1]: Started cri-containerd-38cd678577b6e6a27e8afb00b7e2077daa677a1508c797a365ec93c9aedf423a.scope - libcontainer container 38cd678577b6e6a27e8afb00b7e2077daa677a1508c797a365ec93c9aedf423a.
Nov 5 15:05:28.897817 containerd[1976]: time="2025-11-05T15:05:28.897762928Z" level=info msg="StartContainer for \"38cd678577b6e6a27e8afb00b7e2077daa677a1508c797a365ec93c9aedf423a\" returns successfully"
Nov 5 15:05:29.807116 kubelet[3405]: E1105 15:05:29.806941 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5895d64bd-fm899" podUID="c36dc63e-c060-4c77-b41d-1b4d1b676e6a"
Nov 5 15:05:31.801659 kubelet[3405]: E1105 15:05:31.801592 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-jcc47" podUID="7ef28263-0ce9-4955-869b-6ae38808f23b"
Nov 5 15:05:31.804395 containerd[1976]: time="2025-11-05T15:05:31.802574526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Nov 5 15:05:31.965555 systemd[1]: cri-containerd-a012f939d59cc3226a01dd51a8df61cd578656e3e9c267fa74395f92c6899d45.scope: Deactivated successfully.
Nov 5 15:05:31.966099 systemd[1]: cri-containerd-a012f939d59cc3226a01dd51a8df61cd578656e3e9c267fa74395f92c6899d45.scope: Consumed 5.010s CPU time, 21.3M memory peak, 388K read from disk.
Nov 5 15:05:31.974332 containerd[1976]: time="2025-11-05T15:05:31.974106379Z" level=info msg="received exit event container_id:\"a012f939d59cc3226a01dd51a8df61cd578656e3e9c267fa74395f92c6899d45\" id:\"a012f939d59cc3226a01dd51a8df61cd578656e3e9c267fa74395f92c6899d45\" pid:3158 exit_status:1 exited_at:{seconds:1762355131 nanos:972489691}"
Nov 5 15:05:31.975312 containerd[1976]: time="2025-11-05T15:05:31.974513287Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a012f939d59cc3226a01dd51a8df61cd578656e3e9c267fa74395f92c6899d45\" id:\"a012f939d59cc3226a01dd51a8df61cd578656e3e9c267fa74395f92c6899d45\" pid:3158 exit_status:1 exited_at:{seconds:1762355131 nanos:972489691}"
Nov 5 15:05:32.035843 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a012f939d59cc3226a01dd51a8df61cd578656e3e9c267fa74395f92c6899d45-rootfs.mount: Deactivated successfully.
Nov 5 15:05:32.100752 containerd[1976]: time="2025-11-05T15:05:32.100598847Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:05:32.102795 containerd[1976]: time="2025-11-05T15:05:32.102723687Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Nov 5 15:05:32.102946 containerd[1976]: time="2025-11-05T15:05:32.102855279Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Nov 5 15:05:32.103239 kubelet[3405]: E1105 15:05:32.103181 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 5 15:05:32.103373 kubelet[3405]: E1105 15:05:32.103248 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Nov 5 15:05:32.103494 kubelet[3405]: E1105 15:05:32.103412 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7pv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qgwk8_calico-system(ef9a0063-5427-4eaf-b6d6-01cd9334db4b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:05:32.106568 containerd[1976]: time="2025-11-05T15:05:32.106514091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Nov 5 15:05:32.583532 containerd[1976]: time="2025-11-05T15:05:32.583187058Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:05:32.585459 containerd[1976]: time="2025-11-05T15:05:32.585268002Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Nov 5 15:05:32.585459 containerd[1976]: time="2025-11-05T15:05:32.585402054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Nov 5 15:05:32.586060 kubelet[3405]: E1105 15:05:32.585720 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 5 15:05:32.586519 kubelet[3405]: E1105 15:05:32.586221 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Nov 5 15:05:32.586519 kubelet[3405]: E1105 15:05:32.586440 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k7pv5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-qgwk8_calico-system(ef9a0063-5427-4eaf-b6d6-01cd9334db4b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:05:32.588239 kubelet[3405]: E1105 15:05:32.588124 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-qgwk8" podUID="ef9a0063-5427-4eaf-b6d6-01cd9334db4b"
Nov 5 15:05:32.745051 kubelet[3405]: I1105 15:05:32.744695 3405 scope.go:117] "RemoveContainer" containerID="a012f939d59cc3226a01dd51a8df61cd578656e3e9c267fa74395f92c6899d45"
Nov 5 15:05:32.748195 containerd[1976]: time="2025-11-05T15:05:32.747739255Z" level=info msg="CreateContainer within sandbox \"113422c324ae179c8f61c0e4e2f9d96f210745edfb03c2b3a113e43ac0de628d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Nov 5 15:05:32.775496 containerd[1976]: time="2025-11-05T15:05:32.775437583Z" level=info msg="Container 1a29c6e3c52cdb97146a0cd428078d499dd801ebb62a999e7d8e4dad4d123aca: CDI devices from CRI Config.CDIDevices: []"
Nov 5 15:05:32.784835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1267971697.mount: Deactivated successfully.
Nov 5 15:05:32.800817 containerd[1976]: time="2025-11-05T15:05:32.800751103Z" level=info msg="CreateContainer within sandbox \"113422c324ae179c8f61c0e4e2f9d96f210745edfb03c2b3a113e43ac0de628d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"1a29c6e3c52cdb97146a0cd428078d499dd801ebb62a999e7d8e4dad4d123aca\""
Nov 5 15:05:32.803227 containerd[1976]: time="2025-11-05T15:05:32.801730459Z" level=info msg="StartContainer for \"1a29c6e3c52cdb97146a0cd428078d499dd801ebb62a999e7d8e4dad4d123aca\""
Nov 5 15:05:32.803932 containerd[1976]: time="2025-11-05T15:05:32.803855851Z" level=info msg="connecting to shim 1a29c6e3c52cdb97146a0cd428078d499dd801ebb62a999e7d8e4dad4d123aca" address="unix:///run/containerd/s/3d91da675184255c27abeaa8fae8238b17839cf8fcd3249aedfb39cdf4aa5254" protocol=ttrpc version=3
Nov 5 15:05:32.849461 systemd[1]: Started cri-containerd-1a29c6e3c52cdb97146a0cd428078d499dd801ebb62a999e7d8e4dad4d123aca.scope - libcontainer container 1a29c6e3c52cdb97146a0cd428078d499dd801ebb62a999e7d8e4dad4d123aca.
Nov 5 15:05:32.929010 containerd[1976]: time="2025-11-05T15:05:32.928916204Z" level=info msg="StartContainer for \"1a29c6e3c52cdb97146a0cd428078d499dd801ebb62a999e7d8e4dad4d123aca\" returns successfully"
Nov 5 15:05:36.798570 containerd[1976]: time="2025-11-05T15:05:36.798341063Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Nov 5 15:05:36.799134 kubelet[3405]: E1105 15:05:36.798427 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-784689597c-cgw5w" podUID="10c8b7bb-d826-4668-8911-b97ba8246d4b"
Nov 5 15:05:36.836621 kubelet[3405]: E1105 15:05:36.836515 3405 request.go:1332] Unexpected error when reading response body: net/http: request canceled (Client.Timeout or context cancellation while reading body)
Nov 5 15:05:36.836798 kubelet[3405]: E1105 15:05:36.836688 3405 controller.go:195] "Failed to update lease" err="unexpected error when reading response body. Please retry. Original error: net/http: request canceled (Client.Timeout or context cancellation while reading body)"
Nov 5 15:05:37.119541 containerd[1976]: time="2025-11-05T15:05:37.119370836Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Nov 5 15:05:37.121619 containerd[1976]: time="2025-11-05T15:05:37.121544132Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Nov 5 15:05:37.122085 containerd[1976]: time="2025-11-05T15:05:37.121563752Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Nov 5 15:05:37.122204 kubelet[3405]: E1105 15:05:37.121928 3405 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 15:05:37.122204 kubelet[3405]: E1105 15:05:37.121985 3405 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Nov 5 15:05:37.122608 kubelet[3405]: E1105 15:05:37.122535 3405 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-llq2s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7bb8dc7b97-7jq9b_calico-apiserver(8d9369e7-33c5-42a2-b295-6e6f5445630e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Nov 5 15:05:37.124035 kubelet[3405]: E1105 15:05:37.123948 3405 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7bb8dc7b97-7jq9b" podUID="8d9369e7-33c5-42a2-b295-6e6f5445630e"
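Between the hard ErrImagePull failures, the kubelet entries above switch to ImagePullBackOff, which is a growing retry delay rather than a new error: each failed pull roughly doubles the wait before the next attempt, up to a cap. A minimal sketch of that behaviour in Go, assuming the commonly cited kubelet defaults of a 10s initial delay and a 5m cap (the exact values are an assumption, not something recorded in this log):

    // backoff_sketch.go - illustrative doubling back-off similar in spirit to
    // the image-pull back-off behind the ImagePullBackOff entries above.
    // The 10s start and 5m cap are assumed defaults, not values from this log.
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const (
            initialDelay = 10 * time.Second
            maxDelay     = 5 * time.Minute
        )
        delay := initialDelay
        for attempt := 1; attempt <= 8; attempt++ {
            fmt.Printf("pull attempt %d failed; next retry in %s\n", attempt, delay)
            delay *= 2
            if delay > maxDelay {
                delay = maxDelay
            }
        }
    }

This is why the log shows bursts of PullImage activity separated by progressively quieter stretches: the underlying 404 never changes, only the retry spacing does.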
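When a log like this repeats the same failure for several images, it can be quicker to reduce it to the distinct image references the registry rejected. The sketch below scans journal text on stdin for containerd's "failed to resolve reference" wording and prints each reference once; the regular expression is a loose assumption about the message shape seen above, and the file name missing_images.go is illustrative.

    // missing_images.go - reads journal text on stdin and lists each image
    // reference that containerd reported as unresolvable, based on the
    // escaped "failed to resolve reference \"...\"" wording in this log.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    func main() {
        // Loose pattern: one or more backslashes, a quote, the reference,
        // then backslashes and a closing quote (covers \"ref\" and \\\"ref\\\").
        re := regexp.MustCompile(`failed to resolve reference \\+"([^"\\]+)\\+"`)
        seen := map[string]bool{}

        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
        for sc.Scan() {
            for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
                if !seen[m[1]] {
                    seen[m[1]] = true
                    fmt.Println(m[1])
                }
            }
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, "scan error:", err)
            os.Exit(1)
        }
    }

Fed with this host's journal output, it would print each rejected calico image tag once per line, which makes it easy to see that every failure points at the same missing v3.30.4 tags rather than at many unrelated problems.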