Nov 5 15:02:52.440184 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Nov 5 15:02:52.442762 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Wed Nov 5 13:42:06 -00 2025 Nov 5 15:02:52.442791 kernel: KASLR disabled due to lack of seed Nov 5 15:02:52.442809 kernel: efi: EFI v2.7 by EDK II Nov 5 15:02:52.442826 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a731a98 MEMRESERVE=0x78551598 Nov 5 15:02:52.442841 kernel: secureboot: Secure boot disabled Nov 5 15:02:52.442860 kernel: ACPI: Early table checksum verification disabled Nov 5 15:02:52.442876 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Nov 5 15:02:52.442892 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Nov 5 15:02:52.442911 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Nov 5 15:02:52.442927 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Nov 5 15:02:52.442942 kernel: ACPI: FACS 0x0000000078630000 000040 Nov 5 15:02:52.442958 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Nov 5 15:02:52.442973 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Nov 5 15:02:52.442996 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Nov 5 15:02:52.443013 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Nov 5 15:02:52.443030 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Nov 5 15:02:52.443046 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Nov 5 15:02:52.443063 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Nov 5 15:02:52.443079 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Nov 5 15:02:52.443095 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Nov 5 15:02:52.443111 kernel: printk: legacy bootconsole [uart0] enabled Nov 5 15:02:52.443128 kernel: ACPI: Use ACPI SPCR as default console: No Nov 5 15:02:52.443144 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Nov 5 15:02:52.443165 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff] Nov 5 15:02:52.443181 kernel: Zone ranges: Nov 5 15:02:52.443197 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Nov 5 15:02:52.443240 kernel: DMA32 empty Nov 5 15:02:52.443258 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Nov 5 15:02:52.443275 kernel: Device empty Nov 5 15:02:52.443292 kernel: Movable zone start for each node Nov 5 15:02:52.443309 kernel: Early memory node ranges Nov 5 15:02:52.443325 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Nov 5 15:02:52.443342 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Nov 5 15:02:52.443358 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Nov 5 15:02:52.443374 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Nov 5 15:02:52.443396 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Nov 5 15:02:52.443413 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Nov 5 15:02:52.443429 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Nov 5 15:02:52.443446 kernel: node 0: [mem 
0x0000000400000000-0x00000004b5ffffff] Nov 5 15:02:52.443469 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Nov 5 15:02:52.443490 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Nov 5 15:02:52.443508 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1 Nov 5 15:02:52.443525 kernel: psci: probing for conduit method from ACPI. Nov 5 15:02:52.443542 kernel: psci: PSCIv1.0 detected in firmware. Nov 5 15:02:52.443559 kernel: psci: Using standard PSCI v0.2 function IDs Nov 5 15:02:52.443577 kernel: psci: Trusted OS migration not required Nov 5 15:02:52.443594 kernel: psci: SMC Calling Convention v1.1 Nov 5 15:02:52.443612 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Nov 5 15:02:52.443629 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Nov 5 15:02:52.443650 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Nov 5 15:02:52.443668 kernel: pcpu-alloc: [0] 0 [0] 1 Nov 5 15:02:52.443686 kernel: Detected PIPT I-cache on CPU0 Nov 5 15:02:52.443703 kernel: CPU features: detected: GIC system register CPU interface Nov 5 15:02:52.443720 kernel: CPU features: detected: Spectre-v2 Nov 5 15:02:52.443737 kernel: CPU features: detected: Spectre-v3a Nov 5 15:02:52.443755 kernel: CPU features: detected: Spectre-BHB Nov 5 15:02:52.443772 kernel: CPU features: detected: ARM erratum 1742098 Nov 5 15:02:52.443789 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Nov 5 15:02:52.443807 kernel: alternatives: applying boot alternatives Nov 5 15:02:52.443826 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=15758474ef4cace68fb389c1b75e821ab8f30d9b752a28429e0459793723ea7b Nov 5 15:02:52.443849 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 5 15:02:52.443866 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 5 15:02:52.443883 kernel: Fallback order for Node 0: 0 Nov 5 15:02:52.443901 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616 Nov 5 15:02:52.443918 kernel: Policy zone: Normal Nov 5 15:02:52.443935 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 5 15:02:52.443952 kernel: software IO TLB: area num 2. Nov 5 15:02:52.443970 kernel: software IO TLB: mapped [mem 0x000000006f800000-0x0000000073800000] (64MB) Nov 5 15:02:52.443987 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 5 15:02:52.444005 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 5 15:02:52.444027 kernel: rcu: RCU event tracing is enabled. Nov 5 15:02:52.444044 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 5 15:02:52.444062 kernel: Trampoline variant of Tasks RCU enabled. Nov 5 15:02:52.444080 kernel: Tracing variant of Tasks RCU enabled. Nov 5 15:02:52.444097 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 5 15:02:52.444115 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 5 15:02:52.444133 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. 
Nov 5 15:02:52.444150 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 5 15:02:52.444167 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 5 15:02:52.444185 kernel: GICv3: 96 SPIs implemented Nov 5 15:02:52.444202 kernel: GICv3: 0 Extended SPIs implemented Nov 5 15:02:52.445274 kernel: Root IRQ handler: gic_handle_irq Nov 5 15:02:52.445293 kernel: GICv3: GICv3 features: 16 PPIs Nov 5 15:02:52.445311 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Nov 5 15:02:52.445329 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Nov 5 15:02:52.445346 kernel: ITS [mem 0x10080000-0x1009ffff] Nov 5 15:02:52.445363 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1) Nov 5 15:02:52.445381 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1) Nov 5 15:02:52.445399 kernel: GICv3: using LPI property table @0x0000000400110000 Nov 5 15:02:52.445416 kernel: ITS: Using hypervisor restricted LPI range [128] Nov 5 15:02:52.445434 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000 Nov 5 15:02:52.445451 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 5 15:02:52.445472 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Nov 5 15:02:52.445490 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Nov 5 15:02:52.445507 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Nov 5 15:02:52.445525 kernel: Console: colour dummy device 80x25 Nov 5 15:02:52.445544 kernel: printk: legacy console [tty1] enabled Nov 5 15:02:52.445563 kernel: ACPI: Core revision 20240827 Nov 5 15:02:52.445581 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Nov 5 15:02:52.445600 kernel: pid_max: default: 32768 minimum: 301 Nov 5 15:02:52.445622 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 5 15:02:52.445640 kernel: landlock: Up and running. Nov 5 15:02:52.445658 kernel: SELinux: Initializing. Nov 5 15:02:52.445676 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 5 15:02:52.445694 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 5 15:02:52.445712 kernel: rcu: Hierarchical SRCU implementation. Nov 5 15:02:52.445730 kernel: rcu: Max phase no-delay instances is 400. Nov 5 15:02:52.445749 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 5 15:02:52.445770 kernel: Remapping and enabling EFI services. Nov 5 15:02:52.445788 kernel: smp: Bringing up secondary CPUs ... Nov 5 15:02:52.445806 kernel: Detected PIPT I-cache on CPU1 Nov 5 15:02:52.445824 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Nov 5 15:02:52.445842 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000 Nov 5 15:02:52.445860 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Nov 5 15:02:52.445879 kernel: smp: Brought up 1 node, 2 CPUs Nov 5 15:02:52.445900 kernel: SMP: Total of 2 processors activated. 
Nov 5 15:02:52.445918 kernel: CPU: All CPU(s) started at EL1 Nov 5 15:02:52.445946 kernel: CPU features: detected: 32-bit EL0 Support Nov 5 15:02:52.445968 kernel: CPU features: detected: 32-bit EL1 Support Nov 5 15:02:52.445987 kernel: CPU features: detected: CRC32 instructions Nov 5 15:02:52.446005 kernel: alternatives: applying system-wide alternatives Nov 5 15:02:52.446025 kernel: Memory: 3822956K/4030464K available (11136K kernel code, 2456K rwdata, 9084K rodata, 12992K init, 1038K bss, 186164K reserved, 16384K cma-reserved) Nov 5 15:02:52.446044 kernel: devtmpfs: initialized Nov 5 15:02:52.446067 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 5 15:02:52.446086 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 5 15:02:52.446105 kernel: 23536 pages in range for non-PLT usage Nov 5 15:02:52.446124 kernel: 515056 pages in range for PLT usage Nov 5 15:02:52.446142 kernel: pinctrl core: initialized pinctrl subsystem Nov 5 15:02:52.446183 kernel: SMBIOS 3.0.0 present. Nov 5 15:02:52.446219 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Nov 5 15:02:52.446245 kernel: DMI: Memory slots populated: 0/0 Nov 5 15:02:52.446264 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 5 15:02:52.446284 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 5 15:02:52.446303 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 5 15:02:52.446321 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 5 15:02:52.446346 kernel: audit: initializing netlink subsys (disabled) Nov 5 15:02:52.446365 kernel: audit: type=2000 audit(0.225:1): state=initialized audit_enabled=0 res=1 Nov 5 15:02:52.446383 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 5 15:02:52.446402 kernel: cpuidle: using governor menu Nov 5 15:02:52.446420 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Nov 5 15:02:52.446439 kernel: ASID allocator initialised with 65536 entries Nov 5 15:02:52.446458 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 5 15:02:52.446481 kernel: Serial: AMBA PL011 UART driver Nov 5 15:02:52.446499 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 5 15:02:52.446518 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 5 15:02:52.446537 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 5 15:02:52.446556 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 5 15:02:52.446575 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 5 15:02:52.446595 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 5 15:02:52.446618 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 5 15:02:52.446638 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 5 15:02:52.446656 kernel: ACPI: Added _OSI(Module Device) Nov 5 15:02:52.446676 kernel: ACPI: Added _OSI(Processor Device) Nov 5 15:02:52.446742 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 5 15:02:52.447139 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 5 15:02:52.449332 kernel: ACPI: Interpreter enabled Nov 5 15:02:52.449363 kernel: ACPI: Using GIC for interrupt routing Nov 5 15:02:52.449384 kernel: ACPI: MCFG table detected, 1 entries Nov 5 15:02:52.449404 kernel: ACPI: CPU0 has been hot-added Nov 5 15:02:52.449422 kernel: ACPI: CPU1 has been hot-added Nov 5 15:02:52.449441 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Nov 5 15:02:52.449806 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 5 15:02:52.450064 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 5 15:02:52.451420 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 5 15:02:52.451701 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Nov 5 15:02:52.451955 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Nov 5 15:02:52.451984 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Nov 5 15:02:52.452005 kernel: acpiphp: Slot [1] registered Nov 5 15:02:52.452026 kernel: acpiphp: Slot [2] registered Nov 5 15:02:52.452053 kernel: acpiphp: Slot [3] registered Nov 5 15:02:52.452073 kernel: acpiphp: Slot [4] registered Nov 5 15:02:52.452092 kernel: acpiphp: Slot [5] registered Nov 5 15:02:52.452112 kernel: acpiphp: Slot [6] registered Nov 5 15:02:52.452130 kernel: acpiphp: Slot [7] registered Nov 5 15:02:52.452149 kernel: acpiphp: Slot [8] registered Nov 5 15:02:52.452168 kernel: acpiphp: Slot [9] registered Nov 5 15:02:52.452187 kernel: acpiphp: Slot [10] registered Nov 5 15:02:52.454248 kernel: acpiphp: Slot [11] registered Nov 5 15:02:52.454279 kernel: acpiphp: Slot [12] registered Nov 5 15:02:52.454299 kernel: acpiphp: Slot [13] registered Nov 5 15:02:52.454318 kernel: acpiphp: Slot [14] registered Nov 5 15:02:52.454337 kernel: acpiphp: Slot [15] registered Nov 5 15:02:52.454355 kernel: acpiphp: Slot [16] registered Nov 5 15:02:52.454374 kernel: acpiphp: Slot [17] registered Nov 5 15:02:52.454401 kernel: acpiphp: Slot [18] registered Nov 5 15:02:52.454420 kernel: acpiphp: Slot [19] registered Nov 5 15:02:52.454439 kernel: acpiphp: Slot [20] registered Nov 5 15:02:52.454457 kernel: acpiphp: Slot [21] registered Nov 5 15:02:52.454476 kernel: acpiphp: Slot [22] registered Nov 5 
15:02:52.454495 kernel: acpiphp: Slot [23] registered Nov 5 15:02:52.454513 kernel: acpiphp: Slot [24] registered Nov 5 15:02:52.454536 kernel: acpiphp: Slot [25] registered Nov 5 15:02:52.454554 kernel: acpiphp: Slot [26] registered Nov 5 15:02:52.454573 kernel: acpiphp: Slot [27] registered Nov 5 15:02:52.454591 kernel: acpiphp: Slot [28] registered Nov 5 15:02:52.454610 kernel: acpiphp: Slot [29] registered Nov 5 15:02:52.454629 kernel: acpiphp: Slot [30] registered Nov 5 15:02:52.454647 kernel: acpiphp: Slot [31] registered Nov 5 15:02:52.454666 kernel: PCI host bridge to bus 0000:00 Nov 5 15:02:52.454970 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Nov 5 15:02:52.455220 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 5 15:02:52.455463 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Nov 5 15:02:52.455690 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Nov 5 15:02:52.455979 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint Nov 5 15:02:52.457489 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint Nov 5 15:02:52.457794 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff] Nov 5 15:02:52.458068 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint Nov 5 15:02:52.458381 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff] Nov 5 15:02:52.458643 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 5 15:02:52.458933 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint Nov 5 15:02:52.462502 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff] Nov 5 15:02:52.462833 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref] Nov 5 15:02:52.463086 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff] Nov 5 15:02:52.470637 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Nov 5 15:02:52.470913 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]: assigned Nov 5 15:02:52.471172 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]: assigned Nov 5 15:02:52.471470 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80110000-0x80113fff]: assigned Nov 5 15:02:52.471721 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80114000-0x80117fff]: assigned Nov 5 15:02:52.471981 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]: assigned Nov 5 15:02:52.472248 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Nov 5 15:02:52.472482 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Nov 5 15:02:52.472714 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Nov 5 15:02:52.472741 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 5 15:02:52.472761 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 5 15:02:52.472780 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 5 15:02:52.472800 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 5 15:02:52.472819 kernel: iommu: Default domain type: Translated Nov 5 15:02:52.472838 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 5 15:02:52.472861 kernel: efivars: Registered efivars operations Nov 5 15:02:52.472880 kernel: vgaarb: loaded Nov 5 15:02:52.472898 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 5 15:02:52.472917 kernel: VFS: Disk quotas dquot_6.6.0 Nov 5 15:02:52.472936 kernel: VFS: 
Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 5 15:02:52.472955 kernel: pnp: PnP ACPI init Nov 5 15:02:52.473245 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Nov 5 15:02:52.473280 kernel: pnp: PnP ACPI: found 1 devices Nov 5 15:02:52.473299 kernel: NET: Registered PF_INET protocol family Nov 5 15:02:52.473318 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 5 15:02:52.473337 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 5 15:02:52.473356 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 5 15:02:52.473375 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 5 15:02:52.473394 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 5 15:02:52.473417 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 5 15:02:52.473436 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 5 15:02:52.473455 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 5 15:02:52.473474 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 5 15:02:52.473492 kernel: PCI: CLS 0 bytes, default 64 Nov 5 15:02:52.473511 kernel: kvm [1]: HYP mode not available Nov 5 15:02:52.473530 kernel: Initialise system trusted keyrings Nov 5 15:02:52.473552 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 5 15:02:52.473571 kernel: Key type asymmetric registered Nov 5 15:02:52.473589 kernel: Asymmetric key parser 'x509' registered Nov 5 15:02:52.473608 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 5 15:02:52.473627 kernel: io scheduler mq-deadline registered Nov 5 15:02:52.473646 kernel: io scheduler kyber registered Nov 5 15:02:52.473665 kernel: io scheduler bfq registered Nov 5 15:02:52.473940 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Nov 5 15:02:52.473968 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 5 15:02:52.473988 kernel: ACPI: button: Power Button [PWRB] Nov 5 15:02:52.474007 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Nov 5 15:02:52.474026 kernel: ACPI: button: Sleep Button [SLPB] Nov 5 15:02:52.474046 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 5 15:02:52.474070 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Nov 5 15:02:52.474368 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Nov 5 15:02:52.474397 kernel: printk: legacy console [ttyS0] disabled Nov 5 15:02:52.474417 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Nov 5 15:02:52.474437 kernel: printk: legacy console [ttyS0] enabled Nov 5 15:02:52.474456 kernel: printk: legacy bootconsole [uart0] disabled Nov 5 15:02:52.474476 kernel: thunder_xcv, ver 1.0 Nov 5 15:02:52.474501 kernel: thunder_bgx, ver 1.0 Nov 5 15:02:52.474522 kernel: nicpf, ver 1.0 Nov 5 15:02:52.474540 kernel: nicvf, ver 1.0 Nov 5 15:02:52.474822 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 5 15:02:52.475116 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-05T15:02:48 UTC (1762354968) Nov 5 15:02:52.475145 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 5 15:02:52.475166 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available Nov 5 15:02:52.475192 kernel: NET: Registered PF_INET6 protocol family Nov 5 15:02:52.475241 kernel: 
watchdog: NMI not fully supported Nov 5 15:02:52.475263 kernel: watchdog: Hard watchdog permanently disabled Nov 5 15:02:52.475282 kernel: Segment Routing with IPv6 Nov 5 15:02:52.475301 kernel: In-situ OAM (IOAM) with IPv6 Nov 5 15:02:52.475320 kernel: NET: Registered PF_PACKET protocol family Nov 5 15:02:52.475339 kernel: Key type dns_resolver registered Nov 5 15:02:52.475963 kernel: registered taskstats version 1 Nov 5 15:02:52.475987 kernel: Loading compiled-in X.509 certificates Nov 5 15:02:52.476007 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 4b3babb46eb583bd8b0310732885d24e60ea58c5' Nov 5 15:02:52.476026 kernel: Demotion targets for Node 0: null Nov 5 15:02:52.476046 kernel: Key type .fscrypt registered Nov 5 15:02:52.476065 kernel: Key type fscrypt-provisioning registered Nov 5 15:02:52.476084 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 5 15:02:52.476110 kernel: ima: Allocated hash algorithm: sha1 Nov 5 15:02:52.476130 kernel: ima: No architecture policies found Nov 5 15:02:52.476149 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 5 15:02:52.476168 kernel: clk: Disabling unused clocks Nov 5 15:02:52.476187 kernel: PM: genpd: Disabling unused power domains Nov 5 15:02:52.476315 kernel: Freeing unused kernel memory: 12992K Nov 5 15:02:52.476340 kernel: Run /init as init process Nov 5 15:02:52.476365 kernel: with arguments: Nov 5 15:02:52.476384 kernel: /init Nov 5 15:02:52.476402 kernel: with environment: Nov 5 15:02:52.476420 kernel: HOME=/ Nov 5 15:02:52.476440 kernel: TERM=linux Nov 5 15:02:52.476459 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Nov 5 15:02:52.476707 kernel: nvme nvme0: pci function 0000:00:04.0 Nov 5 15:02:52.476925 kernel: nvme nvme0: 2/0/0 default/read/poll queues Nov 5 15:02:52.476955 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 5 15:02:52.476975 kernel: GPT:25804799 != 33554431 Nov 5 15:02:52.476993 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 5 15:02:52.477011 kernel: GPT:25804799 != 33554431 Nov 5 15:02:52.477030 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 5 15:02:52.477048 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Nov 5 15:02:52.477073 kernel: SCSI subsystem initialized Nov 5 15:02:52.477092 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 5 15:02:52.477111 kernel: device-mapper: uevent: version 1.0.3 Nov 5 15:02:52.477131 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 5 15:02:52.477151 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Nov 5 15:02:52.477170 kernel: raid6: neonx8 gen() 6512 MB/s Nov 5 15:02:52.477190 kernel: raid6: neonx4 gen() 6514 MB/s Nov 5 15:02:52.477260 kernel: raid6: neonx2 gen() 5447 MB/s Nov 5 15:02:52.477283 kernel: raid6: neonx1 gen() 3955 MB/s Nov 5 15:02:52.477303 kernel: raid6: int64x8 gen() 3625 MB/s Nov 5 15:02:52.477322 kernel: raid6: int64x4 gen() 3723 MB/s Nov 5 15:02:52.477341 kernel: raid6: int64x2 gen() 3593 MB/s Nov 5 15:02:52.477360 kernel: raid6: int64x1 gen() 2771 MB/s Nov 5 15:02:52.477379 kernel: raid6: using algorithm neonx4 gen() 6514 MB/s Nov 5 15:02:52.477404 kernel: raid6: .... 
xor() 4917 MB/s, rmw enabled Nov 5 15:02:52.477423 kernel: raid6: using neon recovery algorithm Nov 5 15:02:52.477442 kernel: xor: measuring software checksum speed Nov 5 15:02:52.477460 kernel: 8regs : 12917 MB/sec Nov 5 15:02:52.477479 kernel: 32regs : 12428 MB/sec Nov 5 15:02:52.477498 kernel: arm64_neon : 8622 MB/sec Nov 5 15:02:52.477517 kernel: xor: using function: 8regs (12917 MB/sec) Nov 5 15:02:52.477539 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 5 15:02:52.477559 kernel: BTRFS: device fsid d8f84a83-fd8b-4c0e-831a-0d7c5ff234be devid 1 transid 36 /dev/mapper/usr (254:0) scanned by mount (221) Nov 5 15:02:52.477579 kernel: BTRFS info (device dm-0): first mount of filesystem d8f84a83-fd8b-4c0e-831a-0d7c5ff234be Nov 5 15:02:52.477598 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 5 15:02:52.477617 kernel: BTRFS info (device dm-0): enabling ssd optimizations Nov 5 15:02:52.477636 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 5 15:02:52.477655 kernel: BTRFS info (device dm-0): enabling free space tree Nov 5 15:02:52.477678 kernel: loop: module loaded Nov 5 15:02:52.477697 kernel: loop0: detected capacity change from 0 to 91464 Nov 5 15:02:52.477716 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 5 15:02:52.477738 systemd[1]: Successfully made /usr/ read-only. Nov 5 15:02:52.477763 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 15:02:52.477786 systemd[1]: Detected virtualization amazon. Nov 5 15:02:52.477810 systemd[1]: Detected architecture arm64. Nov 5 15:02:52.477830 systemd[1]: Running in initrd. Nov 5 15:02:52.477850 systemd[1]: No hostname configured, using default hostname. Nov 5 15:02:52.477871 systemd[1]: Hostname set to . Nov 5 15:02:52.477891 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 5 15:02:52.477912 systemd[1]: Queued start job for default target initrd.target. Nov 5 15:02:52.477947 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 15:02:52.477972 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:02:52.477994 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:02:52.478017 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 5 15:02:52.478038 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 15:02:52.478064 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 5 15:02:52.478086 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 5 15:02:52.478108 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 15:02:52.478129 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:02:52.478168 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 5 15:02:52.478195 systemd[1]: Reached target paths.target - Path Units. Nov 5 15:02:52.478254 systemd[1]: Reached target slices.target - Slice Units. 
Nov 5 15:02:52.478277 systemd[1]: Reached target swap.target - Swaps. Nov 5 15:02:52.478298 systemd[1]: Reached target timers.target - Timer Units. Nov 5 15:02:52.478319 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 15:02:52.478341 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 15:02:52.478363 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 5 15:02:52.478385 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 5 15:02:52.478412 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:02:52.478433 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 15:02:52.478455 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:02:52.478476 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 15:02:52.478497 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 5 15:02:52.478519 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 5 15:02:52.478541 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 15:02:52.478566 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 5 15:02:52.478589 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 5 15:02:52.478611 systemd[1]: Starting systemd-fsck-usr.service... Nov 5 15:02:52.478638 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 15:02:52.478665 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 15:02:52.478688 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:02:52.478712 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 5 15:02:52.478739 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:02:52.478762 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 15:02:52.478785 systemd[1]: Finished systemd-fsck-usr.service. Nov 5 15:02:52.478871 systemd-journald[358]: Collecting audit messages is disabled. Nov 5 15:02:52.478922 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 15:02:52.478945 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 15:02:52.478968 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 5 15:02:52.478989 systemd-journald[358]: Journal started Nov 5 15:02:52.479026 systemd-journald[358]: Runtime Journal (/run/log/journal/ec234947078f3b16a1e07e9d7975cad8) is 8M, max 75.3M, 67.3M free. Nov 5 15:02:52.484265 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 15:02:52.488378 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 15:02:52.522588 kernel: Bridge firewalling registered Nov 5 15:02:52.519383 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 5 15:02:52.521132 systemd-modules-load[359]: Inserted module 'br_netfilter' Nov 5 15:02:52.527038 systemd-tmpfiles[375]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 5 15:02:52.534878 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 15:02:52.543079 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:02:52.551436 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:02:52.556439 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 5 15:02:52.564527 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 15:02:52.600093 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:02:52.609307 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 15:02:52.617949 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 15:02:52.638460 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 5 15:02:52.686341 dracut-cmdline[398]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=15758474ef4cace68fb389c1b75e821ab8f30d9b752a28429e0459793723ea7b Nov 5 15:02:52.815828 systemd-resolved[396]: Positive Trust Anchors: Nov 5 15:02:52.815856 systemd-resolved[396]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 15:02:52.815864 systemd-resolved[396]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 15:02:52.815923 systemd-resolved[396]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 15:02:53.021256 kernel: Loading iSCSI transport class v2.0-870. Nov 5 15:02:53.088252 kernel: random: crng init done Nov 5 15:02:53.088694 systemd-resolved[396]: Defaulting to hostname 'linux'. Nov 5 15:02:53.094443 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 15:02:53.114405 kernel: iscsi: registered transport (tcp) Nov 5 15:02:53.100158 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:02:53.169565 kernel: iscsi: registered transport (qla4xxx) Nov 5 15:02:53.169651 kernel: QLogic iSCSI HBA Driver Nov 5 15:02:53.208647 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 15:02:53.230038 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:02:53.233965 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Nov 5 15:02:53.317434 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 5 15:02:53.320534 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 5 15:02:53.334002 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 5 15:02:53.394045 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 5 15:02:53.401810 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:02:53.461721 systemd-udevd[642]: Using default interface naming scheme 'v257'. Nov 5 15:02:53.483332 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:02:53.492602 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 5 15:02:53.542680 dracut-pre-trigger[712]: rd.md=0: removing MD RAID activation Nov 5 15:02:53.545189 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 15:02:53.560372 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 15:02:53.603661 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 15:02:53.613142 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 15:02:53.662793 systemd-networkd[750]: lo: Link UP Nov 5 15:02:53.664436 systemd-networkd[750]: lo: Gained carrier Nov 5 15:02:53.665469 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 15:02:53.669674 systemd[1]: Reached target network.target - Network. Nov 5 15:02:53.774131 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:02:53.789524 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 5 15:02:53.979137 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 15:02:53.981595 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:02:53.984221 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:02:53.993094 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:02:54.032475 kernel: nvme nvme0: using unchecked data buffer Nov 5 15:02:54.038935 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 5 15:02:54.039013 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Nov 5 15:02:54.043284 kernel: ena 0000:00:05.0: ENA device version: 0.10 Nov 5 15:02:54.043640 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Nov 5 15:02:54.054767 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:31:9a:88:da:57 Nov 5 15:02:54.057873 (udev-worker)[785]: Network interface NamePolicy= disabled on kernel command line. Nov 5 15:02:54.070089 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:02:54.078508 systemd-networkd[750]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:02:54.080244 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 5 15:02:54.089627 systemd-networkd[750]: eth0: Link UP Nov 5 15:02:54.089937 systemd-networkd[750]: eth0: Gained carrier Nov 5 15:02:54.089959 systemd-networkd[750]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:02:54.106322 systemd-networkd[750]: eth0: DHCPv4 address 172.31.23.78/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 5 15:02:54.175264 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Nov 5 15:02:54.201169 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 5 15:02:54.241359 disk-uuid[866]: Primary Header is updated. Nov 5 15:02:54.241359 disk-uuid[866]: Secondary Entries is updated. Nov 5 15:02:54.241359 disk-uuid[866]: Secondary Header is updated. Nov 5 15:02:54.264756 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Nov 5 15:02:54.329488 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Nov 5 15:02:54.376887 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 5 15:02:54.698082 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 5 15:02:54.708381 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 15:02:54.713797 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:02:54.718777 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 15:02:54.724965 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 5 15:02:54.758322 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 5 15:02:55.340450 systemd-networkd[750]: eth0: Gained IPv6LL Nov 5 15:02:55.396656 disk-uuid[872]: Warning: The kernel is still using the old partition table. Nov 5 15:02:55.396656 disk-uuid[872]: The new table will be used at the next reboot or after you Nov 5 15:02:55.396656 disk-uuid[872]: run partprobe(8) or kpartx(8) Nov 5 15:02:55.396656 disk-uuid[872]: The operation has completed successfully. Nov 5 15:02:55.413897 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 5 15:02:55.414341 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 5 15:02:55.425442 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 5 15:02:55.477246 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1094) Nov 5 15:02:55.481159 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804 Nov 5 15:02:55.481226 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Nov 5 15:02:55.529800 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 5 15:02:55.529870 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 5 15:02:55.540252 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804 Nov 5 15:02:55.541291 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 5 15:02:55.548620 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Nov 5 15:02:57.016547 ignition[1113]: Ignition 2.22.0 Nov 5 15:02:57.016575 ignition[1113]: Stage: fetch-offline Nov 5 15:02:57.020152 ignition[1113]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:02:57.020194 ignition[1113]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 15:02:57.024639 ignition[1113]: Ignition finished successfully Nov 5 15:02:57.029198 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 15:02:57.036408 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 5 15:02:57.084983 ignition[1121]: Ignition 2.22.0 Nov 5 15:02:57.085013 ignition[1121]: Stage: fetch Nov 5 15:02:57.085563 ignition[1121]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:02:57.085585 ignition[1121]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 15:02:57.087345 ignition[1121]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 15:02:57.113547 ignition[1121]: PUT result: OK Nov 5 15:02:57.120076 ignition[1121]: parsed url from cmdline: "" Nov 5 15:02:57.120094 ignition[1121]: no config URL provided Nov 5 15:02:57.120109 ignition[1121]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 15:02:57.120414 ignition[1121]: no config at "/usr/lib/ignition/user.ign" Nov 5 15:02:57.120479 ignition[1121]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 15:02:57.129883 ignition[1121]: PUT result: OK Nov 5 15:02:57.130338 ignition[1121]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Nov 5 15:02:57.137477 ignition[1121]: GET result: OK Nov 5 15:02:57.138902 ignition[1121]: parsing config with SHA512: ecb3d9f8c69f9404697c307bb1a7aa3e79025f2f89423cc1f392d3c8e2cc343c1922d4dda15ad5c74c95fc4b463e513b4050ce9a9a9fa122be0176857f1c5519 Nov 5 15:02:57.147647 unknown[1121]: fetched base config from "system" Nov 5 15:02:57.148787 ignition[1121]: fetch: fetch complete Nov 5 15:02:57.147680 unknown[1121]: fetched base config from "system" Nov 5 15:02:57.148809 ignition[1121]: fetch: fetch passed Nov 5 15:02:57.147695 unknown[1121]: fetched user config from "aws" Nov 5 15:02:57.148926 ignition[1121]: Ignition finished successfully Nov 5 15:02:57.157951 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 5 15:02:57.171465 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 5 15:02:57.239692 ignition[1127]: Ignition 2.22.0 Nov 5 15:02:57.240187 ignition[1127]: Stage: kargs Nov 5 15:02:57.240769 ignition[1127]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:02:57.240790 ignition[1127]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 15:02:57.240948 ignition[1127]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 15:02:57.247392 ignition[1127]: PUT result: OK Nov 5 15:02:57.254585 ignition[1127]: kargs: kargs passed Nov 5 15:02:57.254936 ignition[1127]: Ignition finished successfully Nov 5 15:02:57.261126 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 5 15:02:57.262914 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Nov 5 15:02:57.309064 ignition[1134]: Ignition 2.22.0 Nov 5 15:02:57.309095 ignition[1134]: Stage: disks Nov 5 15:02:57.309650 ignition[1134]: no configs at "/usr/lib/ignition/base.d" Nov 5 15:02:57.309672 ignition[1134]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 15:02:57.309810 ignition[1134]: PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 15:02:57.315805 ignition[1134]: PUT result: OK Nov 5 15:02:57.327312 ignition[1134]: disks: disks passed Nov 5 15:02:57.327440 ignition[1134]: Ignition finished successfully Nov 5 15:02:57.333381 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 5 15:02:57.336924 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 5 15:02:57.339945 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 5 15:02:57.347543 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 15:02:57.350153 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 15:02:57.355055 systemd[1]: Reached target basic.target - Basic System. Nov 5 15:02:57.363270 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 5 15:02:57.489028 systemd-fsck[1143]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Nov 5 15:02:57.495026 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 5 15:02:57.504576 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 5 15:02:57.762250 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 67ab558f-e1dc-496b-b18a-e9709809a3c4 r/w with ordered data mode. Quota mode: none. Nov 5 15:02:57.764612 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 5 15:02:57.768283 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 5 15:02:57.819633 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 15:02:57.823562 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 5 15:02:57.828823 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 5 15:02:57.829880 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 5 15:02:57.829930 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 15:02:57.855901 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 5 15:02:57.862436 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 5 15:02:57.878266 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1162) Nov 5 15:02:57.883510 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804 Nov 5 15:02:57.883577 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Nov 5 15:02:57.890631 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 5 15:02:57.890686 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 5 15:02:57.893413 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 5 15:02:58.953503 initrd-setup-root[1186]: cut: /sysroot/etc/passwd: No such file or directory Nov 5 15:02:58.962818 initrd-setup-root[1193]: cut: /sysroot/etc/group: No such file or directory Nov 5 15:02:58.972257 initrd-setup-root[1200]: cut: /sysroot/etc/shadow: No such file or directory Nov 5 15:02:58.981603 initrd-setup-root[1207]: cut: /sysroot/etc/gshadow: No such file or directory Nov 5 15:02:59.610955 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 5 15:02:59.613999 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 5 15:02:59.617542 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 5 15:02:59.648856 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 5 15:02:59.652857 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804 Nov 5 15:02:59.686276 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 5 15:02:59.706520 ignition[1275]: INFO : Ignition 2.22.0 Nov 5 15:02:59.706520 ignition[1275]: INFO : Stage: mount Nov 5 15:02:59.711065 ignition[1275]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:02:59.711065 ignition[1275]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 15:02:59.711065 ignition[1275]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 15:02:59.719506 ignition[1275]: INFO : PUT result: OK Nov 5 15:02:59.723402 ignition[1275]: INFO : mount: mount passed Nov 5 15:02:59.726497 ignition[1275]: INFO : Ignition finished successfully Nov 5 15:02:59.728575 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 5 15:02:59.738682 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 5 15:02:59.786736 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 15:02:59.826268 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1286) Nov 5 15:02:59.831119 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804 Nov 5 15:02:59.831181 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Nov 5 15:02:59.837944 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Nov 5 15:02:59.838016 kernel: BTRFS info (device nvme0n1p6): enabling free space tree Nov 5 15:02:59.841363 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 5 15:02:59.891336 ignition[1303]: INFO : Ignition 2.22.0 Nov 5 15:02:59.891336 ignition[1303]: INFO : Stage: files Nov 5 15:02:59.895050 ignition[1303]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:02:59.895050 ignition[1303]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 15:02:59.895050 ignition[1303]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 15:02:59.902921 ignition[1303]: INFO : PUT result: OK Nov 5 15:02:59.913081 ignition[1303]: DEBUG : files: compiled without relabeling support, skipping Nov 5 15:02:59.916693 ignition[1303]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 5 15:02:59.916693 ignition[1303]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 5 15:02:59.956974 ignition[1303]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 5 15:02:59.960266 ignition[1303]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 5 15:02:59.963551 unknown[1303]: wrote ssh authorized keys file for user: core Nov 5 15:02:59.966033 ignition[1303]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 5 15:02:59.972239 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 5 15:02:59.976501 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Nov 5 15:03:00.069240 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 5 15:03:00.272267 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 5 15:03:00.272267 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Nov 5 15:03:00.272267 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Nov 5 15:03:00.272267 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 5 15:03:00.272267 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 5 15:03:00.272267 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 15:03:00.296112 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 15:03:00.296112 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 15:03:00.296112 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 15:03:00.296112 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 15:03:00.296112 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 15:03:00.296112 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 5 15:03:00.296112 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 5 15:03:00.296112 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 5 15:03:00.296112 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Nov 5 15:03:00.778553 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Nov 5 15:03:01.171712 ignition[1303]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Nov 5 15:03:01.171712 ignition[1303]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Nov 5 15:03:01.249483 ignition[1303]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 15:03:01.258581 ignition[1303]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 15:03:01.262857 ignition[1303]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Nov 5 15:03:01.262857 ignition[1303]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Nov 5 15:03:01.262857 ignition[1303]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Nov 5 15:03:01.262857 ignition[1303]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 5 15:03:01.262857 ignition[1303]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Nov 5 15:03:01.262857 ignition[1303]: INFO : files: files passed Nov 5 15:03:01.262857 ignition[1303]: INFO : Ignition finished successfully Nov 5 15:03:01.285726 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 5 15:03:01.290187 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 5 15:03:01.298783 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 5 15:03:01.318688 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 5 15:03:01.322054 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 5 15:03:01.340681 initrd-setup-root-after-ignition[1335]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:03:01.340681 initrd-setup-root-after-ignition[1335]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:03:01.348056 initrd-setup-root-after-ignition[1339]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 15:03:01.353495 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 15:03:01.357397 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 5 15:03:01.366522 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 5 15:03:01.461790 systemd[1]: initrd-parse-etc.service: Deactivated successfully. 
Nov 5 15:03:01.462066 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 5 15:03:01.467369 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 5 15:03:01.469802 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 5 15:03:01.475158 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 5 15:03:01.478809 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 5 15:03:01.516104 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 15:03:01.522295 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 5 15:03:01.554726 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 15:03:01.555492 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:03:01.562677 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:03:01.568028 systemd[1]: Stopped target timers.target - Timer Units. Nov 5 15:03:01.572294 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 5 15:03:01.572536 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 15:03:01.580968 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 5 15:03:01.585707 systemd[1]: Stopped target basic.target - Basic System. Nov 5 15:03:01.592067 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 5 15:03:01.595905 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 15:03:01.599996 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 5 15:03:01.602241 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Nov 5 15:03:01.607055 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 5 15:03:01.611648 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 15:03:01.616174 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 5 15:03:01.621318 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 5 15:03:01.625561 systemd[1]: Stopped target swap.target - Swaps. Nov 5 15:03:01.631488 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 5 15:03:01.631725 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 5 15:03:01.639307 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:03:01.643713 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 15:03:01.646100 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 5 15:03:01.646319 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:03:01.651228 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 5 15:03:01.651482 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 5 15:03:01.663239 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 5 15:03:01.663983 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 15:03:01.665780 systemd[1]: ignition-files.service: Deactivated successfully. Nov 5 15:03:01.666006 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 5 15:03:01.672589 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Nov 5 15:03:01.683521 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 5 15:03:01.685509 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 5 15:03:01.685853 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:03:01.688900 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 5 15:03:01.689107 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:03:01.693859 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 5 15:03:01.695085 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 15:03:01.735488 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 5 15:03:01.735698 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 5 15:03:01.762507 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 5 15:03:01.772173 ignition[1359]: INFO : Ignition 2.22.0 Nov 5 15:03:01.772173 ignition[1359]: INFO : Stage: umount Nov 5 15:03:01.777672 ignition[1359]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 15:03:01.777672 ignition[1359]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Nov 5 15:03:01.777672 ignition[1359]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Nov 5 15:03:01.774296 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 5 15:03:01.789368 ignition[1359]: INFO : PUT result: OK Nov 5 15:03:01.775583 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 5 15:03:01.795666 ignition[1359]: INFO : umount: umount passed Nov 5 15:03:01.797596 ignition[1359]: INFO : Ignition finished successfully Nov 5 15:03:01.802543 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 5 15:03:01.804283 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 5 15:03:01.807892 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 5 15:03:01.807985 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 5 15:03:01.811398 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 5 15:03:01.811486 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 5 15:03:01.813868 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 5 15:03:01.813958 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 5 15:03:01.816011 systemd[1]: Stopped target network.target - Network. Nov 5 15:03:01.819917 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 5 15:03:01.820010 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 15:03:01.823723 systemd[1]: Stopped target paths.target - Path Units. Nov 5 15:03:01.827944 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 5 15:03:01.828075 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:03:01.831736 systemd[1]: Stopped target slices.target - Slice Units. Nov 5 15:03:01.836180 systemd[1]: Stopped target sockets.target - Socket Units. Nov 5 15:03:01.839828 systemd[1]: iscsid.socket: Deactivated successfully. Nov 5 15:03:01.839902 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 15:03:01.843959 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 5 15:03:01.844023 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 15:03:01.848313 systemd[1]: ignition-setup.service: Deactivated successfully. 
Nov 5 15:03:01.848408 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 5 15:03:01.852193 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 5 15:03:01.852302 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 5 15:03:01.856543 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 5 15:03:01.856629 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 5 15:03:01.862049 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 5 15:03:01.866968 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 5 15:03:01.886271 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 5 15:03:01.888337 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 5 15:03:01.902738 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 5 15:03:01.906261 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 5 15:03:01.914493 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 5 15:03:01.920797 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 5 15:03:01.920868 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:03:01.930481 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 5 15:03:01.937395 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 5 15:03:01.937524 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 15:03:01.956514 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 5 15:03:01.956624 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:03:01.963680 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 5 15:03:01.963772 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 5 15:03:01.966868 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:03:02.003152 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 5 15:03:02.003620 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:03:02.012407 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 5 15:03:02.012791 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 5 15:03:02.019815 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 5 15:03:02.019909 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:03:02.024548 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 5 15:03:02.024727 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 5 15:03:02.030165 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 5 15:03:02.030319 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 5 15:03:02.037757 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 5 15:03:02.037873 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 15:03:02.051116 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 5 15:03:02.057140 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 5 15:03:02.057460 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Nov 5 15:03:02.068189 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 5 15:03:02.068916 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 15:03:02.077619 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 5 15:03:02.077720 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 15:03:02.081125 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 5 15:03:02.081255 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:03:02.089500 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 15:03:02.089594 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:03:02.094304 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 5 15:03:02.104223 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 5 15:03:02.122668 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 5 15:03:02.124367 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 5 15:03:02.128018 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 5 15:03:02.132915 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 5 15:03:02.179769 systemd[1]: Switching root. Nov 5 15:03:02.283401 systemd-journald[358]: Journal stopped Nov 5 15:03:06.254003 systemd-journald[358]: Received SIGTERM from PID 1 (systemd). Nov 5 15:03:06.254144 kernel: SELinux: policy capability network_peer_controls=1 Nov 5 15:03:06.254198 kernel: SELinux: policy capability open_perms=1 Nov 5 15:03:06.254261 kernel: SELinux: policy capability extended_socket_class=1 Nov 5 15:03:06.254294 kernel: SELinux: policy capability always_check_network=0 Nov 5 15:03:06.254327 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 5 15:03:06.254359 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 5 15:03:06.254388 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 5 15:03:06.254418 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 5 15:03:06.254454 kernel: SELinux: policy capability userspace_initial_context=0 Nov 5 15:03:06.254483 kernel: audit: type=1403 audit(1762354983.230:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 5 15:03:06.254530 systemd[1]: Successfully loaded SELinux policy in 118.679ms. Nov 5 15:03:06.254577 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.739ms. Nov 5 15:03:06.254624 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 15:03:06.254655 systemd[1]: Detected virtualization amazon. Nov 5 15:03:06.254686 systemd[1]: Detected architecture arm64. Nov 5 15:03:06.254721 systemd[1]: Detected first boot. Nov 5 15:03:06.254751 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 5 15:03:06.254784 zram_generator::config[1402]: No configuration found. Nov 5 15:03:06.254816 kernel: NET: Registered PF_VSOCK protocol family Nov 5 15:03:06.254848 systemd[1]: Populated /etc with preset unit settings. Nov 5 15:03:06.254879 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
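
[Editor's note] After the switch into the real root, systemd reports "Detected first boot" and "Initializing machine ID from SMBIOS/DMI UUID": the hypervisor-provided system UUID seeds /etc/machine-id on this first boot. A rough sketch of reading that UUID from the kernel's DMI sysfs node; this only shows where the value comes from and is not a reimplementation of systemd's machine-id logic.

    # Illustrative only: read the SMBIOS/DMI product UUID that a first-boot
    # machine-id can be seeded from. systemd's real logic lives in
    # systemd-machine-id-setup; reading this sysfs file usually requires root.
    from pathlib import Path
    import uuid

    DMI_UUID = Path("/sys/class/dmi/id/product_uuid")

    def dmi_machine_uuid() -> uuid.UUID:
        raw = DMI_UUID.read_text().strip()
        return uuid.UUID(raw)

    if __name__ == "__main__":
        u = dmi_machine_uuid()
        # machine-id is conventionally the 32-hex-digit form without dashes.
        print(u.hex)
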
Nov 5 15:03:06.254908 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 5 15:03:06.254945 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 5 15:03:06.254978 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 5 15:03:06.255008 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 5 15:03:06.255039 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 5 15:03:06.255071 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 5 15:03:06.255105 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 5 15:03:06.255137 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 5 15:03:06.255173 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 5 15:03:06.257243 systemd[1]: Created slice user.slice - User and Session Slice. Nov 5 15:03:06.257311 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 15:03:06.257356 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 15:03:06.257389 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 5 15:03:06.257421 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 5 15:03:06.257454 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 5 15:03:06.257491 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 15:03:06.257522 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Nov 5 15:03:06.257554 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 15:03:06.257585 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 15:03:06.257616 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 5 15:03:06.257648 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 5 15:03:06.257681 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 5 15:03:06.257714 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 5 15:03:06.257746 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 15:03:06.257779 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 15:03:06.257810 systemd[1]: Reached target slices.target - Slice Units. Nov 5 15:03:06.257840 systemd[1]: Reached target swap.target - Swaps. Nov 5 15:03:06.257870 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 5 15:03:06.257905 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 5 15:03:06.257954 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 5 15:03:06.257987 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 15:03:06.258020 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 15:03:06.258052 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 15:03:06.260326 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 5 15:03:06.260385 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Nov 5 15:03:06.260425 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 5 15:03:06.260458 systemd[1]: Mounting media.mount - External Media Directory... Nov 5 15:03:06.260488 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 5 15:03:06.260519 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 5 15:03:06.260549 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 5 15:03:06.260581 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 5 15:03:06.260611 systemd[1]: Reached target machines.target - Containers. Nov 5 15:03:06.260644 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 5 15:03:06.260674 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:03:06.260706 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 15:03:06.260738 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 5 15:03:06.260768 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 15:03:06.260797 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 15:03:06.260829 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 15:03:06.260862 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 5 15:03:06.260893 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 15:03:06.260923 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 5 15:03:06.260953 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 5 15:03:06.260986 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 5 15:03:06.261019 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 5 15:03:06.261050 systemd[1]: Stopped systemd-fsck-usr.service. Nov 5 15:03:06.261086 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:03:06.261117 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 15:03:06.261149 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 15:03:06.261315 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 15:03:06.261356 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 5 15:03:06.261391 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 5 15:03:06.263873 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 15:03:06.263911 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 5 15:03:06.263941 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 5 15:03:06.263970 systemd[1]: Mounted media.mount - External Media Directory. Nov 5 15:03:06.264006 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. 
Nov 5 15:03:06.264036 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 5 15:03:06.264065 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 5 15:03:06.264095 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 15:03:06.264124 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 5 15:03:06.264154 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 5 15:03:06.264185 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 15:03:06.264347 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 15:03:06.264386 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 15:03:06.264418 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 15:03:06.264448 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 15:03:06.264483 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 15:03:06.264513 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 5 15:03:06.264545 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 15:03:06.264626 systemd-journald[1481]: Collecting audit messages is disabled. Nov 5 15:03:06.264677 kernel: fuse: init (API version 7.41) Nov 5 15:03:06.264711 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 15:03:06.264740 systemd-journald[1481]: Journal started Nov 5 15:03:06.264791 systemd-journald[1481]: Runtime Journal (/run/log/journal/ec234947078f3b16a1e07e9d7975cad8) is 8M, max 75.3M, 67.3M free. Nov 5 15:03:05.638114 systemd[1]: Queued start job for default target multi-user.target. Nov 5 15:03:05.662052 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Nov 5 15:03:05.662966 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 5 15:03:06.272942 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 15:03:06.273092 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 5 15:03:06.274910 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 5 15:03:06.284438 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 15:03:06.300290 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 5 15:03:06.304151 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 5 15:03:06.321165 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 5 15:03:06.337498 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 5 15:03:06.340351 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 5 15:03:06.340408 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 15:03:06.345107 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 5 15:03:06.365248 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:03:06.370720 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 5 15:03:06.378676 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... 
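
[Editor's note] The mount units appearing above (dev-hugepages, dev-mqueue, sys-kernel-debug, sys-kernel-tracing, sys-kernel-config, tmp, and shortly afterwards sys-fs-fuse-connections) each correspond to a kernel API filesystem at a fixed path. A small check of which of those are actually mounted, by scanning /proc/mounts; the path-to-fstype table below mirrors the unit names in this log and standard kernel conventions, nothing more.

    # Report which of the kernel API mount points named in the units above are
    # currently mounted, by scanning /proc/mounts.
    EXPECTED = {
        "/dev/hugepages": "hugetlbfs",
        "/dev/mqueue": "mqueue",
        "/sys/kernel/debug": "debugfs",
        "/sys/kernel/tracing": "tracefs",
        "/sys/kernel/config": "configfs",
        "/sys/fs/fuse/connections": "fusectl",
        "/tmp": "tmpfs",
    }

    def mounted() -> dict[str, str]:
        table = {}
        with open("/proc/mounts") as f:
            for line in f:
                _src, target, fstype, *_rest = line.split()
                table[target] = fstype
        return table

    if __name__ == "__main__":
        table = mounted()
        for path, want in EXPECTED.items():
            got = table.get(path)
            state = "ok" if got == want else f"missing or {got}"
            print(f"{path:28s} expected {want:10s} -> {state}")
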
Nov 5 15:03:06.381396 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 15:03:06.385675 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 5 15:03:06.392590 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 15:03:06.399598 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 5 15:03:06.412460 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 15:03:06.418393 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 5 15:03:06.423845 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 5 15:03:06.431189 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 15:03:06.438131 kernel: ACPI: bus type drm_connector registered Nov 5 15:03:06.445008 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 15:03:06.445541 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 15:03:06.452566 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 5 15:03:06.467371 systemd-journald[1481]: Time spent on flushing to /var/log/journal/ec234947078f3b16a1e07e9d7975cad8 is 72.562ms for 914 entries. Nov 5 15:03:06.467371 systemd-journald[1481]: System Journal (/var/log/journal/ec234947078f3b16a1e07e9d7975cad8) is 8M, max 588.1M, 580.1M free. Nov 5 15:03:06.559532 systemd-journald[1481]: Received client request to flush runtime journal. Nov 5 15:03:06.480448 systemd-tmpfiles[1503]: ACLs are not supported, ignoring. Nov 5 15:03:06.480471 systemd-tmpfiles[1503]: ACLs are not supported, ignoring. Nov 5 15:03:06.495955 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 15:03:06.504632 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 5 15:03:06.509293 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 5 15:03:06.514189 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 5 15:03:06.522584 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 5 15:03:06.566493 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 5 15:03:06.621245 kernel: loop1: detected capacity change from 0 to 100624 Nov 5 15:03:06.627040 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 5 15:03:06.661491 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 15:03:06.668795 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 5 15:03:06.704279 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 15:03:06.724464 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 5 15:03:06.734486 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 15:03:06.740573 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 15:03:06.787594 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. Nov 5 15:03:06.788115 systemd-tmpfiles[1556]: ACLs are not supported, ignoring. Nov 5 15:03:06.796316 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 5 15:03:06.811457 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 5 15:03:06.887029 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 5 15:03:07.022262 kernel: loop2: detected capacity change from 0 to 119344 Nov 5 15:03:07.045404 systemd-resolved[1555]: Positive Trust Anchors: Nov 5 15:03:07.045433 systemd-resolved[1555]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 15:03:07.045442 systemd-resolved[1555]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 15:03:07.045502 systemd-resolved[1555]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 15:03:07.060687 systemd-resolved[1555]: Defaulting to hostname 'linux'. Nov 5 15:03:07.064455 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 15:03:07.067131 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 15:03:07.309253 kernel: loop3: detected capacity change from 0 to 211168 Nov 5 15:03:07.452315 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 5 15:03:07.460840 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 15:03:07.520616 systemd-udevd[1569]: Using default interface naming scheme 'v257'. Nov 5 15:03:07.597275 kernel: loop4: detected capacity change from 0 to 61264 Nov 5 15:03:07.637726 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 15:03:07.645593 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 15:03:07.748852 (udev-worker)[1572]: Network interface NamePolicy= disabled on kernel command line. Nov 5 15:03:07.749235 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Nov 5 15:03:07.892134 systemd-networkd[1575]: lo: Link UP Nov 5 15:03:07.892150 systemd-networkd[1575]: lo: Gained carrier Nov 5 15:03:07.897629 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 15:03:07.902364 systemd[1]: Reached target network.target - Network. Nov 5 15:03:07.908472 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 5 15:03:07.917537 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 5 15:03:07.924539 systemd-networkd[1575]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:03:07.924565 systemd-networkd[1575]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 5 15:03:07.938837 systemd-networkd[1575]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:03:07.938983 systemd-networkd[1575]: eth0: Link UP Nov 5 15:03:07.941817 systemd-networkd[1575]: eth0: Gained carrier Nov 5 15:03:07.941867 systemd-networkd[1575]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 15:03:07.951422 systemd-networkd[1575]: eth0: DHCPv4 address 172.31.23.78/20, gateway 172.31.16.1 acquired from 172.31.16.1 Nov 5 15:03:07.959269 kernel: loop5: detected capacity change from 0 to 100624 Nov 5 15:03:07.983246 kernel: loop6: detected capacity change from 0 to 119344 Nov 5 15:03:08.008274 kernel: loop7: detected capacity change from 0 to 211168 Nov 5 15:03:08.010728 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 5 15:03:08.041287 kernel: loop1: detected capacity change from 0 to 61264 Nov 5 15:03:08.068841 (sd-merge)[1610]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-ami.raw'. Nov 5 15:03:08.088459 (sd-merge)[1610]: Merged extensions into '/usr'. Nov 5 15:03:08.135555 systemd[1]: Reload requested from client PID 1535 ('systemd-sysext') (unit systemd-sysext.service)... Nov 5 15:03:08.135588 systemd[1]: Reloading... Nov 5 15:03:08.415236 zram_generator::config[1661]: No configuration found. Nov 5 15:03:08.978980 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Nov 5 15:03:08.982461 systemd[1]: Reloading finished in 845 ms. Nov 5 15:03:09.008315 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 5 15:03:09.085716 systemd[1]: Starting ensure-sysext.service... Nov 5 15:03:09.095490 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 5 15:03:09.111023 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 15:03:09.117703 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 15:03:09.158988 systemd[1]: Reload requested from client PID 1787 ('systemctl') (unit ensure-sysext.service)... Nov 5 15:03:09.159019 systemd[1]: Reloading... Nov 5 15:03:09.209234 systemd-tmpfiles[1789]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 5 15:03:09.209985 systemd-tmpfiles[1789]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 5 15:03:09.210796 systemd-tmpfiles[1789]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 5 15:03:09.211533 systemd-tmpfiles[1789]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 5 15:03:09.213505 systemd-tmpfiles[1789]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 5 15:03:09.214325 systemd-tmpfiles[1789]: ACLs are not supported, ignoring. Nov 5 15:03:09.214587 systemd-tmpfiles[1789]: ACLs are not supported, ignoring. Nov 5 15:03:09.225838 systemd-tmpfiles[1789]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 15:03:09.226057 systemd-tmpfiles[1789]: Skipping /boot Nov 5 15:03:09.246554 systemd-tmpfiles[1789]: Detected autofs mount point /boot during canonicalization of boot. 
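
[Editor's note] The "(sd-merge)" entries above show systemd-sysext overlaying four extension images onto /usr, including the kubernetes.raw image that the Ignition files stage linked into /etc/extensions earlier in this log. A sketch that lists candidate images from the common sysext search directories; the directory list is taken from the documented defaults and the rest is purely illustrative.

    # List candidate system-extension images the way systemd-sysext discovers
    # them. Search directories below are among the documented defaults.
    from pathlib import Path

    SYSEXT_DIRS = [
        Path("/etc/extensions"),
        Path("/run/extensions"),
        Path("/var/lib/extensions"),
    ]

    def candidate_extensions() -> list[Path]:
        found = []
        for d in SYSEXT_DIRS:
            if not d.is_dir():
                continue
            # Raw disk images and plain directories are both accepted.
            found.extend(p for p in sorted(d.iterdir())
                         if p.suffix == ".raw" or p.is_dir())
        return found

    if __name__ == "__main__":
        for image in candidate_extensions():
            # e.g. /etc/extensions/kubernetes.raw -> the kubernetes-v1.33.0
            # image written during the Ignition files stage above.
            print(image)
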
Nov 5 15:03:09.246576 systemd-tmpfiles[1789]: Skipping /boot Nov 5 15:03:09.313330 zram_generator::config[1828]: No configuration found. Nov 5 15:03:09.357379 systemd-networkd[1575]: eth0: Gained IPv6LL Nov 5 15:03:09.753563 systemd[1]: Reloading finished in 593 ms. Nov 5 15:03:09.771475 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 15:03:09.801764 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 5 15:03:09.805562 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 15:03:09.811655 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 15:03:09.830819 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 15:03:09.835885 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 15:03:09.853452 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 5 15:03:09.863654 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 5 15:03:09.874991 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 5 15:03:09.881816 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 5 15:03:09.896506 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:03:09.899232 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 15:03:09.907152 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 15:03:09.920809 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 15:03:09.925849 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:03:09.926121 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:03:09.935388 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:03:09.935758 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:03:09.935986 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:03:09.944199 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 15:03:09.947803 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 15:03:09.952781 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 15:03:09.953038 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 15:03:09.953383 systemd[1]: Reached target time-set.target - System Time Set. Nov 5 15:03:09.972627 systemd[1]: Finished ensure-sysext.service. 
Nov 5 15:03:09.999010 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 5 15:03:10.006427 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 15:03:10.007517 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 15:03:10.011232 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 15:03:10.011584 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 15:03:10.014886 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 15:03:10.015351 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 15:03:10.028687 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 15:03:10.045437 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 15:03:10.046001 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 15:03:10.049666 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 15:03:10.055317 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 5 15:03:10.127465 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 5 15:03:10.130891 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 5 15:03:10.135419 augenrules[1921]: No rules Nov 5 15:03:10.138008 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 15:03:10.138559 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 15:03:12.983593 ldconfig[1887]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 5 15:03:12.990367 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 5 15:03:12.996161 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 5 15:03:13.023912 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 5 15:03:13.026965 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 15:03:13.029456 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 5 15:03:13.032343 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 5 15:03:13.035509 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 5 15:03:13.038160 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 5 15:03:13.041071 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 5 15:03:13.043882 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 5 15:03:13.043938 systemd[1]: Reached target paths.target - Path Units. Nov 5 15:03:13.046033 systemd[1]: Reached target timers.target - Timer Units. Nov 5 15:03:13.049437 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 5 15:03:13.054453 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Nov 5 15:03:13.060795 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 5 15:03:13.064251 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 5 15:03:13.067220 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 5 15:03:13.073170 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 5 15:03:13.076374 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 5 15:03:13.080133 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 5 15:03:13.083013 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 15:03:13.085110 systemd[1]: Reached target basic.target - Basic System. Nov 5 15:03:13.087114 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 5 15:03:13.087291 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 5 15:03:13.090283 systemd[1]: Starting containerd.service - containerd container runtime... Nov 5 15:03:13.109497 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 5 15:03:13.116711 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 5 15:03:13.126494 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 5 15:03:13.144183 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 5 15:03:13.150831 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 5 15:03:13.153341 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 5 15:03:13.158620 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:03:13.167431 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 5 15:03:13.174385 systemd[1]: Started ntpd.service - Network Time Service. Nov 5 15:03:13.183782 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 15:03:13.192566 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 5 15:03:13.202239 jq[1937]: false Nov 5 15:03:13.203574 systemd[1]: Starting setup-oem.service - Setup OEM... Nov 5 15:03:13.212483 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 5 15:03:13.218627 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 5 15:03:13.234553 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 5 15:03:13.236979 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 5 15:03:13.250928 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 5 15:03:13.261728 extend-filesystems[1938]: Found /dev/nvme0n1p6 Nov 5 15:03:13.262742 systemd[1]: Starting update-engine.service - Update Engine... Nov 5 15:03:13.282472 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 5 15:03:13.304794 extend-filesystems[1938]: Found /dev/nvme0n1p9 Nov 5 15:03:13.311304 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Nov 5 15:03:13.314692 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 5 15:03:13.315089 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 5 15:03:13.329142 extend-filesystems[1938]: Checking size of /dev/nvme0n1p9 Nov 5 15:03:13.352195 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 5 15:03:13.352657 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 5 15:03:13.425784 systemd[1]: motdgen.service: Deactivated successfully. Nov 5 15:03:13.428953 jq[1957]: true Nov 5 15:03:13.435134 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 5 15:03:13.441907 extend-filesystems[1938]: Resized partition /dev/nvme0n1p9 Nov 5 15:03:13.459238 extend-filesystems[1995]: resize2fs 1.47.3 (8-Jul-2025) Nov 5 15:03:13.471327 tar[1965]: linux-arm64/LICENSE Nov 5 15:03:13.479851 tar[1965]: linux-arm64/helm Nov 5 15:03:13.484343 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 1617920 to 2604027 blocks Nov 5 15:03:13.520236 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 2604027 Nov 5 15:03:13.538395 extend-filesystems[1995]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Nov 5 15:03:13.538395 extend-filesystems[1995]: old_desc_blocks = 1, new_desc_blocks = 2 Nov 5 15:03:13.538395 extend-filesystems[1995]: The filesystem on /dev/nvme0n1p9 is now 2604027 (4k) blocks long. Nov 5 15:03:13.554342 extend-filesystems[1938]: Resized filesystem in /dev/nvme0n1p9 Nov 5 15:03:13.549481 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 5 15:03:13.560299 (ntainerd)[1997]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 5 15:03:13.564643 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 5 15:03:13.574182 ntpd[1941]: ntpd 4.2.8p18@1.4062-o Wed Nov 5 13:12:54 UTC 2025 (1): Starting Nov 5 15:03:13.580713 ntpd[1941]: 5 Nov 15:03:13 ntpd[1941]: ntpd 4.2.8p18@1.4062-o Wed Nov 5 13:12:54 UTC 2025 (1): Starting Nov 5 15:03:13.580713 ntpd[1941]: 5 Nov 15:03:13 ntpd[1941]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 5 15:03:13.580713 ntpd[1941]: 5 Nov 15:03:13 ntpd[1941]: ---------------------------------------------------- Nov 5 15:03:13.580713 ntpd[1941]: 5 Nov 15:03:13 ntpd[1941]: ntp-4 is maintained by Network Time Foundation, Nov 5 15:03:13.580713 ntpd[1941]: 5 Nov 15:03:13 ntpd[1941]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 5 15:03:13.580713 ntpd[1941]: 5 Nov 15:03:13 ntpd[1941]: corporation. Support and training for ntp-4 are Nov 5 15:03:13.580713 ntpd[1941]: 5 Nov 15:03:13 ntpd[1941]: available at https://www.nwtime.org/support Nov 5 15:03:13.580713 ntpd[1941]: 5 Nov 15:03:13 ntpd[1941]: ---------------------------------------------------- Nov 5 15:03:13.574342 ntpd[1941]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Nov 5 15:03:13.574361 ntpd[1941]: ---------------------------------------------------- Nov 5 15:03:13.574378 ntpd[1941]: ntp-4 is maintained by Network Time Foundation, Nov 5 15:03:13.574394 ntpd[1941]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Nov 5 15:03:13.574410 ntpd[1941]: corporation. 
Support and training for ntp-4 are Nov 5 15:03:13.574426 ntpd[1941]: available at https://www.nwtime.org/support Nov 5 15:03:13.574442 ntpd[1941]: ---------------------------------------------------- Nov 5 15:03:13.585639 ntpd[1941]: proto: precision = 0.096 usec (-23) Nov 5 15:03:13.592514 ntpd[1941]: 5 Nov 15:03:13 ntpd[1941]: proto: precision = 0.096 usec (-23) Nov 5 15:03:13.592514 ntpd[1941]: 5 Nov 15:03:13 ntpd[1941]: basedate set to 2025-10-24 Nov 5 15:03:13.592514 ntpd[1941]: 5 Nov 15:03:13 ntpd[1941]: gps base set to 2025-10-26 (week 2390) Nov 5 15:03:13.592514 ntpd[1941]: 5 Nov 15:03:13 ntpd[1941]: Listen and drop on 0 v6wildcard [::]:123 Nov 5 15:03:13.592514 ntpd[1941]: 5 Nov 15:03:13 ntpd[1941]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 5 15:03:13.592514 ntpd[1941]: 5 Nov 15:03:13 ntpd[1941]: Listen normally on 2 lo 127.0.0.1:123 Nov 5 15:03:13.592514 ntpd[1941]: 5 Nov 15:03:13 ntpd[1941]: Listen normally on 3 eth0 172.31.23.78:123 Nov 5 15:03:13.592514 ntpd[1941]: 5 Nov 15:03:13 ntpd[1941]: Listen normally on 4 lo [::1]:123 Nov 5 15:03:13.592514 ntpd[1941]: 5 Nov 15:03:13 ntpd[1941]: Listen normally on 5 eth0 [fe80::431:9aff:fe88:da57%2]:123 Nov 5 15:03:13.592514 ntpd[1941]: 5 Nov 15:03:13 ntpd[1941]: Listening on routing socket on fd #22 for interface updates Nov 5 15:03:13.586071 ntpd[1941]: basedate set to 2025-10-24 Nov 5 15:03:13.586094 ntpd[1941]: gps base set to 2025-10-26 (week 2390) Nov 5 15:03:13.587939 ntpd[1941]: Listen and drop on 0 v6wildcard [::]:123 Nov 5 15:03:13.587995 ntpd[1941]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Nov 5 15:03:13.592100 ntpd[1941]: Listen normally on 2 lo 127.0.0.1:123 Nov 5 15:03:13.592152 ntpd[1941]: Listen normally on 3 eth0 172.31.23.78:123 Nov 5 15:03:13.592200 ntpd[1941]: Listen normally on 4 lo [::1]:123 Nov 5 15:03:13.592274 ntpd[1941]: Listen normally on 5 eth0 [fe80::431:9aff:fe88:da57%2]:123 Nov 5 15:03:13.592315 ntpd[1941]: Listening on routing socket on fd #22 for interface updates Nov 5 15:03:13.594423 dbus-daemon[1935]: [system] SELinux support is enabled Nov 5 15:03:13.594773 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 5 15:03:13.601874 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 5 15:03:13.601915 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 5 15:03:13.604988 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 5 15:03:13.605022 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 5 15:03:13.619578 ntpd[1941]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 5 15:03:13.624777 ntpd[1941]: 5 Nov 15:03:13 ntpd[1941]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 5 15:03:13.624777 ntpd[1941]: 5 Nov 15:03:13 ntpd[1941]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 5 15:03:13.619644 ntpd[1941]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Nov 5 15:03:13.628320 jq[1996]: true Nov 5 15:03:13.629690 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
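
[Editor's note] The extend-filesystems entries a few lines above grow the root ext4 filesystem on /dev/nvme0n1p9 online from 1617920 to 2604027 4 KiB blocks. A quick conversion of those two block counts into bytes and GiB, to make the before and after sizes concrete.

    # Convert the block counts from the extend-filesystems/resize2fs log above.
    BLOCK = 4096                # "(4k) blocks" per the resize2fs output
    OLD_BLOCKS = 1_617_920
    NEW_BLOCKS = 2_604_027

    def gib(blocks: int) -> float:
        return blocks * BLOCK / 2**30

    if __name__ == "__main__":
        print(f"before: {OLD_BLOCKS * BLOCK} bytes (~{gib(OLD_BLOCKS):.2f} GiB)")
        print(f"after:  {NEW_BLOCKS * BLOCK} bytes (~{gib(NEW_BLOCKS):.2f} GiB)")
        # roughly 6.17 GiB -> 9.93 GiB after the online resize
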
Nov 5 15:03:13.639235 update_engine[1951]: I20251105 15:03:13.629539 1951 main.cc:92] Flatcar Update Engine starting Nov 5 15:03:13.653146 dbus-daemon[1935]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1575 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Nov 5 15:03:13.662986 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Nov 5 15:03:13.666266 systemd[1]: Finished setup-oem.service - Setup OEM. Nov 5 15:03:13.681505 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Nov 5 15:03:13.685098 update_engine[1951]: I20251105 15:03:13.685009 1951 update_check_scheduler.cc:74] Next update check in 11m59s Nov 5 15:03:13.688933 systemd[1]: Started update-engine.service - Update Engine. Nov 5 15:03:13.704972 coreos-metadata[1934]: Nov 05 15:03:13.704 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 5 15:03:13.713790 coreos-metadata[1934]: Nov 05 15:03:13.713 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Nov 5 15:03:13.717938 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 5 15:03:13.725650 coreos-metadata[1934]: Nov 05 15:03:13.725 INFO Fetch successful Nov 5 15:03:13.725932 coreos-metadata[1934]: Nov 05 15:03:13.725 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Nov 5 15:03:13.728091 coreos-metadata[1934]: Nov 05 15:03:13.727 INFO Fetch successful Nov 5 15:03:13.728504 coreos-metadata[1934]: Nov 05 15:03:13.728 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Nov 5 15:03:13.729105 coreos-metadata[1934]: Nov 05 15:03:13.728 INFO Fetch successful Nov 5 15:03:13.729394 coreos-metadata[1934]: Nov 05 15:03:13.729 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Nov 5 15:03:13.733552 coreos-metadata[1934]: Nov 05 15:03:13.733 INFO Fetch successful Nov 5 15:03:13.734557 coreos-metadata[1934]: Nov 05 15:03:13.733 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Nov 5 15:03:13.734557 coreos-metadata[1934]: Nov 05 15:03:13.734 INFO Fetch failed with 404: resource not found Nov 5 15:03:13.734729 coreos-metadata[1934]: Nov 05 15:03:13.734 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Nov 5 15:03:13.749461 coreos-metadata[1934]: Nov 05 15:03:13.749 INFO Fetch successful Nov 5 15:03:13.749461 coreos-metadata[1934]: Nov 05 15:03:13.749 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Nov 5 15:03:13.749461 coreos-metadata[1934]: Nov 05 15:03:13.749 INFO Fetch successful Nov 5 15:03:13.749461 coreos-metadata[1934]: Nov 05 15:03:13.749 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Nov 5 15:03:13.749461 coreos-metadata[1934]: Nov 05 15:03:13.749 INFO Fetch successful Nov 5 15:03:13.749461 coreos-metadata[1934]: Nov 05 15:03:13.749 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Nov 5 15:03:13.749461 coreos-metadata[1934]: Nov 05 15:03:13.749 INFO Fetch successful Nov 5 15:03:13.749461 coreos-metadata[1934]: Nov 05 15:03:13.749 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Nov 5 15:03:13.752706 coreos-metadata[1934]: Nov 05 15:03:13.752 INFO Fetch successful Nov 5 15:03:13.960322 systemd[1]: Finished coreos-metadata.service - 
Flatcar Metadata Agent. Nov 5 15:03:13.963752 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 15:03:14.094033 amazon-ssm-agent[2016]: Initializing new seelog logger Nov 5 15:03:14.094033 amazon-ssm-agent[2016]: New Seelog Logger Creation Complete Nov 5 15:03:14.094033 amazon-ssm-agent[2016]: 2025/11/05 15:03:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 15:03:14.094033 amazon-ssm-agent[2016]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 15:03:14.096084 amazon-ssm-agent[2016]: 2025/11/05 15:03:14 processing appconfig overrides Nov 5 15:03:14.097018 systemd-logind[1950]: Watching system buttons on /dev/input/event0 (Power Button) Nov 5 15:03:14.098177 systemd-logind[1950]: Watching system buttons on /dev/input/event1 (Sleep Button) Nov 5 15:03:14.106257 amazon-ssm-agent[2016]: 2025/11/05 15:03:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 15:03:14.106257 amazon-ssm-agent[2016]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 15:03:14.106257 amazon-ssm-agent[2016]: 2025/11/05 15:03:14 processing appconfig overrides Nov 5 15:03:14.106257 amazon-ssm-agent[2016]: 2025/11/05 15:03:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 15:03:14.106257 amazon-ssm-agent[2016]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 15:03:14.106257 amazon-ssm-agent[2016]: 2025/11/05 15:03:14 processing appconfig overrides Nov 5 15:03:14.106135 systemd-logind[1950]: New seat seat0. Nov 5 15:03:14.111417 amazon-ssm-agent[2016]: 2025-11-05 15:03:14.1003 INFO Proxy environment variables: Nov 5 15:03:14.112751 bash[2054]: Updated "/home/core/.ssh/authorized_keys" Nov 5 15:03:14.116357 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 5 15:03:14.125379 amazon-ssm-agent[2016]: 2025/11/05 15:03:14 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 15:03:14.125379 amazon-ssm-agent[2016]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 15:03:14.125379 amazon-ssm-agent[2016]: 2025/11/05 15:03:14 processing appconfig overrides Nov 5 15:03:14.130585 systemd[1]: Starting sshkeys.service... Nov 5 15:03:14.135720 systemd[1]: Started systemd-logind.service - User Login Management. Nov 5 15:03:14.155636 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Nov 5 15:03:14.163515 dbus-daemon[1935]: [system] Successfully activated service 'org.freedesktop.hostname1' Nov 5 15:03:14.178761 dbus-daemon[1935]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2014 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Nov 5 15:03:14.198976 systemd[1]: Starting polkit.service - Authorization Manager... Nov 5 15:03:14.214345 amazon-ssm-agent[2016]: 2025-11-05 15:03:14.1025 INFO no_proxy: Nov 5 15:03:14.320320 amazon-ssm-agent[2016]: 2025-11-05 15:03:14.1025 INFO https_proxy: Nov 5 15:03:14.346083 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Nov 5 15:03:14.354322 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Nov 5 15:03:14.421235 amazon-ssm-agent[2016]: 2025-11-05 15:03:14.1025 INFO http_proxy: Nov 5 15:03:14.527540 amazon-ssm-agent[2016]: 2025-11-05 15:03:14.1027 INFO Checking if agent identity type OnPrem can be assumed Nov 5 15:03:14.621588 coreos-metadata[2125]: Nov 05 15:03:14.621 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Nov 5 15:03:14.629310 coreos-metadata[2125]: Nov 05 15:03:14.623 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Nov 5 15:03:14.629310 coreos-metadata[2125]: Nov 05 15:03:14.624 INFO Fetch successful Nov 5 15:03:14.629310 coreos-metadata[2125]: Nov 05 15:03:14.624 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Nov 5 15:03:14.629310 coreos-metadata[2125]: Nov 05 15:03:14.625 INFO Fetch successful Nov 5 15:03:14.629757 amazon-ssm-agent[2016]: 2025-11-05 15:03:14.1028 INFO Checking if agent identity type EC2 can be assumed Nov 5 15:03:14.629824 unknown[2125]: wrote ssh authorized keys file for user: core Nov 5 15:03:14.705479 update-ssh-keys[2163]: Updated "/home/core/.ssh/authorized_keys" Nov 5 15:03:14.712341 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Nov 5 15:03:14.728960 systemd[1]: Finished sshkeys.service. Nov 5 15:03:14.740454 polkitd[2097]: Started polkitd version 126 Nov 5 15:03:14.769158 amazon-ssm-agent[2016]: 2025-11-05 15:03:14.7688 INFO Agent will take identity from EC2 Nov 5 15:03:14.786876 locksmithd[2017]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 5 15:03:14.824715 polkitd[2097]: Loading rules from directory /etc/polkit-1/rules.d Nov 5 15:03:14.836569 polkitd[2097]: Loading rules from directory /run/polkit-1/rules.d Nov 5 15:03:14.836680 polkitd[2097]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 5 15:03:14.838379 polkitd[2097]: Loading rules from directory /usr/local/share/polkit-1/rules.d Nov 5 15:03:14.838465 polkitd[2097]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Nov 5 15:03:14.838553 polkitd[2097]: Loading rules from directory /usr/share/polkit-1/rules.d Nov 5 15:03:14.851260 polkitd[2097]: Finished loading, compiling and executing 2 rules Nov 5 15:03:14.851759 systemd[1]: Started polkit.service - Authorization Manager. 
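The polkitd messages above ("Error opening rules directory ... No such file or directory") are routine: polkitd probes a fixed list of rules directories, two of them do not exist on this image, so it reports them and moves on, finishing with 2 rules loaded from the directories that do exist. The sketch below reproduces the same probe order; the directory list is copied from the log, the script itself is only illustrative.

# Probe the polkit rules directories in the order polkitd logged above.
# Directory list copied from the log; this script is illustrative only.
import os

RULES_DIRS = [
    "/etc/polkit-1/rules.d",
    "/run/polkit-1/rules.d",
    "/usr/local/share/polkit-1/rules.d",
    "/usr/share/polkit-1/rules.d",
]

for d in RULES_DIRS:
    if os.path.isdir(d):
        rules = sorted(f for f in os.listdir(d) if f.endswith(".rules"))
        print(f"{d}: {len(rules)} rule file(s) {rules}")
    else:
        print(f"{d}: missing (polkitd logs this as an error and continues)")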
Nov 5 15:03:14.859899 containerd[1997]: time="2025-11-05T15:03:14Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 5 15:03:14.861369 containerd[1997]: time="2025-11-05T15:03:14.861315718Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 5 15:03:14.864991 dbus-daemon[1935]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Nov 5 15:03:14.867585 polkitd[2097]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Nov 5 15:03:14.878234 amazon-ssm-agent[2016]: 2025-11-05 15:03:14.7788 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Nov 5 15:03:14.885753 containerd[1997]: time="2025-11-05T15:03:14.885689183Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.904µs" Nov 5 15:03:14.887022 containerd[1997]: time="2025-11-05T15:03:14.886980587Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 5 15:03:14.890235 containerd[1997]: time="2025-11-05T15:03:14.887163515Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 5 15:03:14.890235 containerd[1997]: time="2025-11-05T15:03:14.889555835Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 5 15:03:14.890235 containerd[1997]: time="2025-11-05T15:03:14.889595279Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 5 15:03:14.890235 containerd[1997]: time="2025-11-05T15:03:14.889655303Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:03:14.890235 containerd[1997]: time="2025-11-05T15:03:14.889766135Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 15:03:14.890235 containerd[1997]: time="2025-11-05T15:03:14.889790987Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:03:14.890235 containerd[1997]: time="2025-11-05T15:03:14.890167055Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 15:03:14.892231 containerd[1997]: time="2025-11-05T15:03:14.890201375Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:03:14.892231 containerd[1997]: time="2025-11-05T15:03:14.890659079Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 15:03:14.892231 containerd[1997]: time="2025-11-05T15:03:14.890683007Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 5 15:03:14.892231 containerd[1997]: time="2025-11-05T15:03:14.890857739Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 5 15:03:14.892231 containerd[1997]: time="2025-11-05T15:03:14.891268091Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 15:03:14.892231 containerd[1997]: time="2025-11-05T15:03:14.891331415Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 15:03:14.892231 containerd[1997]: time="2025-11-05T15:03:14.891356387Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 5 15:03:14.892231 containerd[1997]: time="2025-11-05T15:03:14.891409127Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 5 15:03:14.892231 containerd[1997]: time="2025-11-05T15:03:14.891830507Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 5 15:03:14.892231 containerd[1997]: time="2025-11-05T15:03:14.891938483Z" level=info msg="metadata content store policy set" policy=shared Nov 5 15:03:14.898756 containerd[1997]: time="2025-11-05T15:03:14.898695887Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 5 15:03:14.898994 containerd[1997]: time="2025-11-05T15:03:14.898963259Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 5 15:03:14.899320 containerd[1997]: time="2025-11-05T15:03:14.899283791Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 5 15:03:14.899598 containerd[1997]: time="2025-11-05T15:03:14.899567819Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 5 15:03:14.899915 containerd[1997]: time="2025-11-05T15:03:14.899724251Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 5 15:03:14.900173 containerd[1997]: time="2025-11-05T15:03:14.900143219Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 5 15:03:14.901340 containerd[1997]: time="2025-11-05T15:03:14.901285403Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 5 15:03:14.901410 containerd[1997]: time="2025-11-05T15:03:14.901344311Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 5 15:03:14.901410 containerd[1997]: time="2025-11-05T15:03:14.901381139Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 5 15:03:14.901503 containerd[1997]: time="2025-11-05T15:03:14.901419971Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 5 15:03:14.901503 containerd[1997]: time="2025-11-05T15:03:14.901446695Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 5 15:03:14.901503 containerd[1997]: time="2025-11-05T15:03:14.901478699Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 5 15:03:14.901760 containerd[1997]: time="2025-11-05T15:03:14.901715963Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 5 15:03:14.901814 containerd[1997]: time="2025-11-05T15:03:14.901769135Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 5 
15:03:14.901858 containerd[1997]: time="2025-11-05T15:03:14.901818275Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 5 15:03:14.901858 containerd[1997]: time="2025-11-05T15:03:14.901847471Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 5 15:03:14.901943 containerd[1997]: time="2025-11-05T15:03:14.901876139Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 5 15:03:14.901943 containerd[1997]: time="2025-11-05T15:03:14.901903283Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 5 15:03:14.901943 containerd[1997]: time="2025-11-05T15:03:14.901931159Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 5 15:03:14.902108 containerd[1997]: time="2025-11-05T15:03:14.901957523Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 5 15:03:14.902108 containerd[1997]: time="2025-11-05T15:03:14.901984835Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 5 15:03:14.902108 containerd[1997]: time="2025-11-05T15:03:14.902032499Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 5 15:03:14.902108 containerd[1997]: time="2025-11-05T15:03:14.902060867Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 5 15:03:14.902523 containerd[1997]: time="2025-11-05T15:03:14.902481959Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 5 15:03:14.902577 containerd[1997]: time="2025-11-05T15:03:14.902526179Z" level=info msg="Start snapshots syncer" Nov 5 15:03:14.909385 containerd[1997]: time="2025-11-05T15:03:14.909316451Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 5 15:03:14.909970 containerd[1997]: time="2025-11-05T15:03:14.909891047Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 5 15:03:14.910185 containerd[1997]: time="2025-11-05T15:03:14.909994007Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 5 15:03:14.912312 containerd[1997]: time="2025-11-05T15:03:14.912255131Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 5 15:03:14.912571 containerd[1997]: time="2025-11-05T15:03:14.912526523Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 5 15:03:14.912629 containerd[1997]: time="2025-11-05T15:03:14.912585083Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 5 15:03:14.912629 containerd[1997]: time="2025-11-05T15:03:14.912616775Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 5 15:03:14.912724 containerd[1997]: time="2025-11-05T15:03:14.912647315Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 5 15:03:14.912724 containerd[1997]: time="2025-11-05T15:03:14.912679991Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 5 15:03:14.912724 containerd[1997]: time="2025-11-05T15:03:14.912707903Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 5 15:03:14.912848 containerd[1997]: time="2025-11-05T15:03:14.912734903Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 5 15:03:14.912848 containerd[1997]: time="2025-11-05T15:03:14.912799943Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 5 15:03:14.912848 containerd[1997]: 
time="2025-11-05T15:03:14.912829631Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 5 15:03:14.912962 containerd[1997]: time="2025-11-05T15:03:14.912856835Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 5 15:03:14.920782 containerd[1997]: time="2025-11-05T15:03:14.920710787Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 15:03:14.920896 containerd[1997]: time="2025-11-05T15:03:14.920798267Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 15:03:14.920896 containerd[1997]: time="2025-11-05T15:03:14.920825435Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 15:03:14.920896 containerd[1997]: time="2025-11-05T15:03:14.920852771Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 15:03:14.920896 containerd[1997]: time="2025-11-05T15:03:14.920874659Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 15:03:14.921079 containerd[1997]: time="2025-11-05T15:03:14.920903735Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 15:03:14.921079 containerd[1997]: time="2025-11-05T15:03:14.920931527Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 15:03:14.921176 containerd[1997]: time="2025-11-05T15:03:14.921104783Z" level=info msg="runtime interface created" Nov 5 15:03:14.921176 containerd[1997]: time="2025-11-05T15:03:14.921122495Z" level=info msg="created NRI interface" Nov 5 15:03:14.921176 containerd[1997]: time="2025-11-05T15:03:14.921147503Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 5 15:03:14.921349 containerd[1997]: time="2025-11-05T15:03:14.921178283Z" level=info msg="Connect containerd service" Nov 5 15:03:14.921349 containerd[1997]: time="2025-11-05T15:03:14.921269195Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 15:03:14.931247 containerd[1997]: time="2025-11-05T15:03:14.930737099Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 15:03:14.947469 systemd-hostnamed[2014]: Hostname set to (transient) Nov 5 15:03:14.947507 systemd-resolved[1555]: System hostname changed to 'ip-172-31-23-78'. Nov 5 15:03:14.975731 amazon-ssm-agent[2016]: 2025-11-05 15:03:14.7788 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Nov 5 15:03:15.076842 amazon-ssm-agent[2016]: 2025-11-05 15:03:14.7789 INFO [amazon-ssm-agent] Starting Core Agent Nov 5 15:03:15.182473 amazon-ssm-agent[2016]: 2025-11-05 15:03:14.7789 INFO [amazon-ssm-agent] Registrar detected. 
Attempting registration Nov 5 15:03:15.283382 amazon-ssm-agent[2016]: 2025-11-05 15:03:14.7789 INFO [Registrar] Starting registrar module Nov 5 15:03:15.390286 amazon-ssm-agent[2016]: 2025-11-05 15:03:14.7857 INFO [EC2Identity] Checking disk for registration info Nov 5 15:03:15.430197 containerd[1997]: time="2025-11-05T15:03:15.427067865Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 5 15:03:15.430607 containerd[1997]: time="2025-11-05T15:03:15.427250121Z" level=info msg="Start subscribing containerd event" Nov 5 15:03:15.430735 containerd[1997]: time="2025-11-05T15:03:15.430710069Z" level=info msg="Start recovering state" Nov 5 15:03:15.430982 containerd[1997]: time="2025-11-05T15:03:15.430956441Z" level=info msg="Start event monitor" Nov 5 15:03:15.436929 containerd[1997]: time="2025-11-05T15:03:15.433252029Z" level=info msg="Start cni network conf syncer for default" Nov 5 15:03:15.436929 containerd[1997]: time="2025-11-05T15:03:15.433289865Z" level=info msg="Start streaming server" Nov 5 15:03:15.436929 containerd[1997]: time="2025-11-05T15:03:15.433310541Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 15:03:15.436929 containerd[1997]: time="2025-11-05T15:03:15.433332837Z" level=info msg="runtime interface starting up..." Nov 5 15:03:15.436929 containerd[1997]: time="2025-11-05T15:03:15.433347597Z" level=info msg="starting plugins..." Nov 5 15:03:15.436929 containerd[1997]: time="2025-11-05T15:03:15.433383765Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 15:03:15.436929 containerd[1997]: time="2025-11-05T15:03:15.433385433Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 5 15:03:15.433823 systemd[1]: Started containerd.service - containerd container runtime. Nov 5 15:03:15.438070 containerd[1997]: time="2025-11-05T15:03:15.437861193Z" level=info msg="containerd successfully booted in 0.578657s" Nov 5 15:03:15.490301 amazon-ssm-agent[2016]: 2025-11-05 15:03:14.7858 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Nov 5 15:03:15.585732 tar[1965]: linux-arm64/README.md Nov 5 15:03:15.591141 amazon-ssm-agent[2016]: 2025-11-05 15:03:14.7858 INFO [EC2Identity] Generating registration keypair Nov 5 15:03:15.622329 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 5 15:03:15.676966 sshd_keygen[1980]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 5 15:03:15.720341 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 5 15:03:15.729681 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 5 15:03:15.768975 systemd[1]: issuegen.service: Deactivated successfully. Nov 5 15:03:15.769665 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 5 15:03:15.778634 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 5 15:03:15.814277 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 5 15:03:15.821746 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 5 15:03:15.829962 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 5 15:03:15.834393 systemd[1]: Reached target getty.target - Login Prompts. Nov 5 15:03:15.885948 amazon-ssm-agent[2016]: 2025-11-05 15:03:15.8857 INFO [EC2Identity] Checking write access before registering Nov 5 15:03:15.935322 amazon-ssm-agent[2016]: 2025/11/05 15:03:15 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
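The "starting cri plugin" entry earlier in this boot dumps containerd's effective CRI configuration as a single escaped JSON blob (SystemdCgroup=true, CNI conf dir /etc/cni/net.d, CDI enabled, and so on), which also explains the later "failed to load cni during init" error: /etc/cni/net.d is still empty at this point. To read that blob, the value of its config= field can be pasted through a small decoder such as the hypothetical helper below; it is not part of the boot flow, just a reading aid.

# Pretty-print the escaped JSON that containerd logs in its config= field
# (see the "starting cri plugin" entry above). Paste the text between the
# outer quotes of config="..." on stdin; this helper is illustrative only.
import json
import sys

raw = sys.stdin.read().strip()
try:
    cfg = json.loads(raw)
except json.JSONDecodeError:
    # The journal shows the field with backslash-escaped quotes; undo that first.
    cfg = json.loads(raw.replace('\\"', '"'))

print(json.dumps(cfg, indent=2, sort_keys=True))

# A couple of the settings visible in this particular boot:
print("SystemdCgroup:", cfg["containerd"]["runtimes"]["runc"]["options"]["SystemdCgroup"])
print("CNI conf dir: ", cfg["cni"]["confDir"])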
Nov 5 15:03:15.935322 amazon-ssm-agent[2016]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Nov 5 15:03:15.935497 amazon-ssm-agent[2016]: 2025/11/05 15:03:15 processing appconfig overrides Nov 5 15:03:15.971156 amazon-ssm-agent[2016]: 2025-11-05 15:03:15.8870 INFO [EC2Identity] Registering EC2 instance with Systems Manager Nov 5 15:03:15.971156 amazon-ssm-agent[2016]: 2025-11-05 15:03:15.9349 INFO [EC2Identity] EC2 registration was successful. Nov 5 15:03:15.971156 amazon-ssm-agent[2016]: 2025-11-05 15:03:15.9349 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. Nov 5 15:03:15.971373 amazon-ssm-agent[2016]: 2025-11-05 15:03:15.9350 INFO [CredentialRefresher] credentialRefresher has started Nov 5 15:03:15.971373 amazon-ssm-agent[2016]: 2025-11-05 15:03:15.9351 INFO [CredentialRefresher] Starting credentials refresher loop Nov 5 15:03:15.971373 amazon-ssm-agent[2016]: 2025-11-05 15:03:15.9707 INFO EC2RoleProvider Successfully connected with instance profile role credentials Nov 5 15:03:15.971373 amazon-ssm-agent[2016]: 2025-11-05 15:03:15.9710 INFO [CredentialRefresher] Credentials ready Nov 5 15:03:15.986994 amazon-ssm-agent[2016]: 2025-11-05 15:03:15.9712 INFO [CredentialRefresher] Next credential rotation will be in 29.9999913337 minutes Nov 5 15:03:17.001403 amazon-ssm-agent[2016]: 2025-11-05 15:03:17.0009 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Nov 5 15:03:17.102705 amazon-ssm-agent[2016]: 2025-11-05 15:03:17.0039 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2219) started Nov 5 15:03:17.204018 amazon-ssm-agent[2016]: 2025-11-05 15:03:17.0039 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Nov 5 15:03:18.417598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:03:18.421487 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 5 15:03:18.427451 systemd[1]: Startup finished in 3.892s (kernel) + 11.910s (initrd) + 15.315s (userspace) = 31.118s. Nov 5 15:03:18.439736 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:03:18.970090 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 5 15:03:18.972333 systemd[1]: Started sshd@0-172.31.23.78:22-139.178.89.65:45842.service - OpenSSH per-connection server daemon (139.178.89.65:45842). Nov 5 15:03:19.347651 sshd[2242]: Accepted publickey for core from 139.178.89.65 port 45842 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:03:19.352360 sshd-session[2242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:03:19.380302 systemd-logind[1950]: New session 1 of user core. Nov 5 15:03:19.381726 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 5 15:03:19.385406 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 5 15:03:19.419994 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 5 15:03:19.427624 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 5 15:03:19.449438 (systemd)[2250]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 5 15:03:19.453997 systemd-logind[1950]: New session c1 of user core. 
Nov 5 15:03:19.747161 systemd[2250]: Queued start job for default target default.target. Nov 5 15:03:19.765521 systemd[2250]: Created slice app.slice - User Application Slice. Nov 5 15:03:19.765585 systemd[2250]: Reached target paths.target - Paths. Nov 5 15:03:19.765673 systemd[2250]: Reached target timers.target - Timers. Nov 5 15:03:19.771419 systemd[2250]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 5 15:03:19.805304 systemd[2250]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 5 15:03:19.805554 systemd[2250]: Reached target sockets.target - Sockets. Nov 5 15:03:19.805638 systemd[2250]: Reached target basic.target - Basic System. Nov 5 15:03:19.805718 systemd[2250]: Reached target default.target - Main User Target. Nov 5 15:03:19.805782 systemd[2250]: Startup finished in 339ms. Nov 5 15:03:19.806247 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 5 15:03:19.830286 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 5 15:03:19.990685 systemd[1]: Started sshd@1-172.31.23.78:22-139.178.89.65:45852.service - OpenSSH per-connection server daemon (139.178.89.65:45852). Nov 5 15:03:20.205577 sshd[2261]: Accepted publickey for core from 139.178.89.65 port 45852 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:03:20.207503 sshd-session[2261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:03:20.215955 systemd-logind[1950]: New session 2 of user core. Nov 5 15:03:20.225503 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 5 15:03:20.319487 kubelet[2234]: E1105 15:03:20.319401 2234 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:03:20.325033 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:03:20.325406 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:03:20.326332 systemd[1]: kubelet.service: Consumed 1.479s CPU time, 257.4M memory peak. Nov 5 15:03:20.356977 sshd[2264]: Connection closed by 139.178.89.65 port 45852 Nov 5 15:03:20.357763 sshd-session[2261]: pam_unix(sshd:session): session closed for user core Nov 5 15:03:20.364785 systemd[1]: sshd@1-172.31.23.78:22-139.178.89.65:45852.service: Deactivated successfully. Nov 5 15:03:20.368559 systemd[1]: session-2.scope: Deactivated successfully. Nov 5 15:03:20.371308 systemd-logind[1950]: Session 2 logged out. Waiting for processes to exit. Nov 5 15:03:20.373875 systemd-logind[1950]: Removed session 2. Nov 5 15:03:20.395626 systemd[1]: Started sshd@2-172.31.23.78:22-139.178.89.65:45868.service - OpenSSH per-connection server daemon (139.178.89.65:45868). Nov 5 15:03:21.021138 systemd-resolved[1555]: Clock change detected. Flushing caches. Nov 5 15:03:21.050019 sshd[2273]: Accepted publickey for core from 139.178.89.65 port 45868 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:03:21.052458 sshd-session[2273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:03:21.061986 systemd-logind[1950]: New session 3 of user core. Nov 5 15:03:21.080169 systemd[1]: Started session-3.scope - Session 3 of User core. 
Nov 5 15:03:21.199825 sshd[2276]: Connection closed by 139.178.89.65 port 45868 Nov 5 15:03:21.199504 sshd-session[2273]: pam_unix(sshd:session): session closed for user core Nov 5 15:03:21.207250 systemd[1]: sshd@2-172.31.23.78:22-139.178.89.65:45868.service: Deactivated successfully. Nov 5 15:03:21.210729 systemd[1]: session-3.scope: Deactivated successfully. Nov 5 15:03:21.212826 systemd-logind[1950]: Session 3 logged out. Waiting for processes to exit. Nov 5 15:03:21.215783 systemd-logind[1950]: Removed session 3. Nov 5 15:03:21.239512 systemd[1]: Started sshd@3-172.31.23.78:22-139.178.89.65:45884.service - OpenSSH per-connection server daemon (139.178.89.65:45884). Nov 5 15:03:21.434357 sshd[2282]: Accepted publickey for core from 139.178.89.65 port 45884 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:03:21.436651 sshd-session[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:03:21.446000 systemd-logind[1950]: New session 4 of user core. Nov 5 15:03:21.457172 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 5 15:03:21.584349 sshd[2285]: Connection closed by 139.178.89.65 port 45884 Nov 5 15:03:21.584812 sshd-session[2282]: pam_unix(sshd:session): session closed for user core Nov 5 15:03:21.592504 systemd[1]: sshd@3-172.31.23.78:22-139.178.89.65:45884.service: Deactivated successfully. Nov 5 15:03:21.597661 systemd[1]: session-4.scope: Deactivated successfully. Nov 5 15:03:21.599467 systemd-logind[1950]: Session 4 logged out. Waiting for processes to exit. Nov 5 15:03:21.601797 systemd-logind[1950]: Removed session 4. Nov 5 15:03:21.621457 systemd[1]: Started sshd@4-172.31.23.78:22-139.178.89.65:45894.service - OpenSSH per-connection server daemon (139.178.89.65:45894). Nov 5 15:03:21.812194 sshd[2291]: Accepted publickey for core from 139.178.89.65 port 45894 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:03:21.814600 sshd-session[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:03:21.823969 systemd-logind[1950]: New session 5 of user core. Nov 5 15:03:21.831132 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 5 15:03:22.038634 sudo[2295]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 5 15:03:22.039480 sudo[2295]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:03:22.054745 sudo[2295]: pam_unix(sudo:session): session closed for user root Nov 5 15:03:22.079396 sshd[2294]: Connection closed by 139.178.89.65 port 45894 Nov 5 15:03:22.079121 sshd-session[2291]: pam_unix(sshd:session): session closed for user core Nov 5 15:03:22.086555 systemd-logind[1950]: Session 5 logged out. Waiting for processes to exit. Nov 5 15:03:22.086798 systemd[1]: sshd@4-172.31.23.78:22-139.178.89.65:45894.service: Deactivated successfully. Nov 5 15:03:22.090762 systemd[1]: session-5.scope: Deactivated successfully. Nov 5 15:03:22.097029 systemd-logind[1950]: Removed session 5. Nov 5 15:03:22.116280 systemd[1]: Started sshd@5-172.31.23.78:22-139.178.89.65:45898.service - OpenSSH per-connection server daemon (139.178.89.65:45898). Nov 5 15:03:22.312811 sshd[2301]: Accepted publickey for core from 139.178.89.65 port 45898 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:03:22.315162 sshd-session[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:03:22.323973 systemd-logind[1950]: New session 6 of user core. 
Nov 5 15:03:22.333167 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 5 15:03:22.439536 sudo[2306]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 5 15:03:22.440190 sudo[2306]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:03:22.447861 sudo[2306]: pam_unix(sudo:session): session closed for user root Nov 5 15:03:22.459981 sudo[2305]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 5 15:03:22.460558 sudo[2305]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:03:22.477927 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 15:03:22.545258 augenrules[2328]: No rules Nov 5 15:03:22.546537 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 15:03:22.547045 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 15:03:22.549705 sudo[2305]: pam_unix(sudo:session): session closed for user root Nov 5 15:03:22.574933 sshd[2304]: Connection closed by 139.178.89.65 port 45898 Nov 5 15:03:22.574827 sshd-session[2301]: pam_unix(sshd:session): session closed for user core Nov 5 15:03:22.581837 systemd[1]: sshd@5-172.31.23.78:22-139.178.89.65:45898.service: Deactivated successfully. Nov 5 15:03:22.585022 systemd[1]: session-6.scope: Deactivated successfully. Nov 5 15:03:22.587864 systemd-logind[1950]: Session 6 logged out. Waiting for processes to exit. Nov 5 15:03:22.590204 systemd-logind[1950]: Removed session 6. Nov 5 15:03:22.609993 systemd[1]: Started sshd@6-172.31.23.78:22-139.178.89.65:45914.service - OpenSSH per-connection server daemon (139.178.89.65:45914). Nov 5 15:03:22.814147 sshd[2337]: Accepted publickey for core from 139.178.89.65 port 45914 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:03:22.816343 sshd-session[2337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:03:22.824448 systemd-logind[1950]: New session 7 of user core. Nov 5 15:03:22.833143 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 5 15:03:22.938197 sudo[2341]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 5 15:03:22.938774 sudo[2341]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 15:03:24.268494 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 5 15:03:24.286633 (dockerd)[2358]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 5 15:03:25.380108 dockerd[2358]: time="2025-11-05T15:03:25.379471385Z" level=info msg="Starting up" Nov 5 15:03:25.381063 dockerd[2358]: time="2025-11-05T15:03:25.381004385Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 5 15:03:25.401744 dockerd[2358]: time="2025-11-05T15:03:25.401674253Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 5 15:03:25.481841 dockerd[2358]: time="2025-11-05T15:03:25.481182642Z" level=info msg="Loading containers: start." Nov 5 15:03:25.497950 kernel: Initializing XFRM netlink socket Nov 5 15:03:26.052675 (udev-worker)[2378]: Network interface NamePolicy= disabled on kernel command line. 
Nov 5 15:03:26.166030 systemd-networkd[1575]: docker0: Link UP Nov 5 15:03:26.176279 dockerd[2358]: time="2025-11-05T15:03:26.176105585Z" level=info msg="Loading containers: done." Nov 5 15:03:26.207150 dockerd[2358]: time="2025-11-05T15:03:26.207065633Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 5 15:03:26.207733 dockerd[2358]: time="2025-11-05T15:03:26.207571229Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 5 15:03:26.208184 dockerd[2358]: time="2025-11-05T15:03:26.208125797Z" level=info msg="Initializing buildkit" Nov 5 15:03:26.268121 dockerd[2358]: time="2025-11-05T15:03:26.267582462Z" level=info msg="Completed buildkit initialization" Nov 5 15:03:26.284695 dockerd[2358]: time="2025-11-05T15:03:26.284631234Z" level=info msg="Daemon has completed initialization" Nov 5 15:03:26.285126 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 5 15:03:26.286714 dockerd[2358]: time="2025-11-05T15:03:26.286045098Z" level=info msg="API listen on /run/docker.sock" Nov 5 15:03:27.433731 containerd[1997]: time="2025-11-05T15:03:27.433659439Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\"" Nov 5 15:03:28.121589 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4051186444.mount: Deactivated successfully. Nov 5 15:03:29.645246 containerd[1997]: time="2025-11-05T15:03:29.645164230Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:29.647345 containerd[1997]: time="2025-11-05T15:03:29.646962238Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390228" Nov 5 15:03:29.648935 containerd[1997]: time="2025-11-05T15:03:29.648859150Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:29.654958 containerd[1997]: time="2025-11-05T15:03:29.654873826Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:29.657139 containerd[1997]: time="2025-11-05T15:03:29.657076282Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 2.223353843s" Nov 5 15:03:29.657261 containerd[1997]: time="2025-11-05T15:03:29.657143998Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\"" Nov 5 15:03:29.661339 containerd[1997]: time="2025-11-05T15:03:29.660860638Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\"" Nov 5 15:03:30.905316 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 5 15:03:30.911253 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
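The kubelet failures above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory", followed by "Scheduled restart job, restart counter is at 1") are the expected state of a node that has not been initialized yet: systemd keeps restarting the unit and kubelet keeps exiting with status 1 until a provisioner such as kubeadm writes that config file. A quick check of the kind you might script while watching this loop is sketched below; the first path is taken from the error message, the other two are the usual kubeadm drop-in locations and are listed here as an assumption about how the node will eventually be provisioned.

# Check for the files kubelet is complaining about in the log above.
# /var/lib/kubelet/config.yaml is the path from the error message; the other
# paths are the customary kubeadm locations, listed here as an assumption.
import os

paths = [
    "/var/lib/kubelet/config.yaml",        # missing: the exact error logged above
    "/etc/kubernetes/kubelet.conf",        # kubeconfig kubeadm normally writes
    "/var/lib/kubelet/kubeadm-flags.env",  # extra flags kubeadm normally writes
]

for p in paths:
    state = "present" if os.path.exists(p) else "missing"
    print(f"{p}: {state}")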
Nov 5 15:03:31.305286 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:03:31.318530 (kubelet)[2638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:03:31.443663 kubelet[2638]: E1105 15:03:31.443581 2638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:03:31.456055 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:03:31.456343 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:03:31.459522 systemd[1]: kubelet.service: Consumed 346ms CPU time, 107.1M memory peak. Nov 5 15:03:31.586386 containerd[1997]: time="2025-11-05T15:03:31.585927876Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:31.587960 containerd[1997]: time="2025-11-05T15:03:31.587654172Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547917" Nov 5 15:03:31.589328 containerd[1997]: time="2025-11-05T15:03:31.589254648Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:31.595393 containerd[1997]: time="2025-11-05T15:03:31.594953472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:31.597021 containerd[1997]: time="2025-11-05T15:03:31.596956176Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.936021954s" Nov 5 15:03:31.597021 containerd[1997]: time="2025-11-05T15:03:31.597014424Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\"" Nov 5 15:03:31.597729 containerd[1997]: time="2025-11-05T15:03:31.597666480Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\"" Nov 5 15:03:33.112835 containerd[1997]: time="2025-11-05T15:03:33.112398720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:33.114938 containerd[1997]: time="2025-11-05T15:03:33.113410524Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295977" Nov 5 15:03:33.115152 containerd[1997]: time="2025-11-05T15:03:33.115109400Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:33.120185 containerd[1997]: time="2025-11-05T15:03:33.120128748Z" level=info 
msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:33.122418 containerd[1997]: time="2025-11-05T15:03:33.122332692Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.524604928s" Nov 5 15:03:33.122418 containerd[1997]: time="2025-11-05T15:03:33.122411880Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\"" Nov 5 15:03:33.123149 containerd[1997]: time="2025-11-05T15:03:33.123065592Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\"" Nov 5 15:03:34.578607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3082063330.mount: Deactivated successfully. Nov 5 15:03:35.188363 containerd[1997]: time="2025-11-05T15:03:35.188301866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:35.190146 containerd[1997]: time="2025-11-05T15:03:35.190073234Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240106" Nov 5 15:03:35.191628 containerd[1997]: time="2025-11-05T15:03:35.191529746Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:35.194651 containerd[1997]: time="2025-11-05T15:03:35.194572502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:35.196181 containerd[1997]: time="2025-11-05T15:03:35.195922898Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 2.072761558s" Nov 5 15:03:35.196181 containerd[1997]: time="2025-11-05T15:03:35.195974810Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\"" Nov 5 15:03:35.196924 containerd[1997]: time="2025-11-05T15:03:35.196860842Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Nov 5 15:03:35.854662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2801910352.mount: Deactivated successfully. 
Nov 5 15:03:37.065722 containerd[1997]: time="2025-11-05T15:03:37.065660691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:37.068064 containerd[1997]: time="2025-11-05T15:03:37.068004339Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Nov 5 15:03:37.070769 containerd[1997]: time="2025-11-05T15:03:37.069200175Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:37.076256 containerd[1997]: time="2025-11-05T15:03:37.076190331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:37.078439 containerd[1997]: time="2025-11-05T15:03:37.078380115Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.881104877s" Nov 5 15:03:37.078935 containerd[1997]: time="2025-11-05T15:03:37.078436911Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Nov 5 15:03:37.079880 containerd[1997]: time="2025-11-05T15:03:37.079841295Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 5 15:03:37.524549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2693999807.mount: Deactivated successfully. 
Nov 5 15:03:37.531696 containerd[1997]: time="2025-11-05T15:03:37.530401074Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:03:37.531696 containerd[1997]: time="2025-11-05T15:03:37.531645594Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Nov 5 15:03:37.532505 containerd[1997]: time="2025-11-05T15:03:37.532468014Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:03:37.536023 containerd[1997]: time="2025-11-05T15:03:37.535978506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 15:03:37.537394 containerd[1997]: time="2025-11-05T15:03:37.537339402Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 457.311831ms" Nov 5 15:03:37.537511 containerd[1997]: time="2025-11-05T15:03:37.537392670Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 5 15:03:37.538862 containerd[1997]: time="2025-11-05T15:03:37.538824642Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Nov 5 15:03:38.011687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4282955995.mount: Deactivated successfully. 
Nov 5 15:03:40.539674 containerd[1997]: time="2025-11-05T15:03:40.539591180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:40.541411 containerd[1997]: time="2025-11-05T15:03:40.540950336Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465857" Nov 5 15:03:40.544016 containerd[1997]: time="2025-11-05T15:03:40.543953216Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:40.549023 containerd[1997]: time="2025-11-05T15:03:40.548972492Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:03:40.551359 containerd[1997]: time="2025-11-05T15:03:40.551314881Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.012281463s" Nov 5 15:03:40.551538 containerd[1997]: time="2025-11-05T15:03:40.551493369Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Nov 5 15:03:41.655346 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 5 15:03:41.660221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:03:41.998148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:03:42.012790 (kubelet)[2797]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 15:03:42.082875 kubelet[2797]: E1105 15:03:42.082819 2797 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 15:03:42.088367 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 15:03:42.089033 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 15:03:42.089575 systemd[1]: kubelet.service: Consumed 295ms CPU time, 107M memory peak. Nov 5 15:03:45.429479 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Nov 5 15:03:47.563126 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:03:47.563493 systemd[1]: kubelet.service: Consumed 295ms CPU time, 107M memory peak. Nov 5 15:03:47.575972 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:03:47.617474 systemd[1]: Reload requested from client PID 2814 ('systemctl') (unit session-7.scope)... Nov 5 15:03:47.617514 systemd[1]: Reloading... Nov 5 15:03:47.877927 zram_generator::config[2868]: No configuration found. Nov 5 15:03:48.326487 systemd[1]: Reloading finished in 708 ms. 
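Each "Pulled image ... in ..." entry above pairs an image size in bytes with a wall-clock pull duration, so effective pull throughput falls straight out of the logged numbers (the etcd image, for example, is roughly 70 MB in about 3.0 s). The short calculation below uses only sizes and durations copied from the log:

# Effective pull throughput for the image pulls logged above.
# Sizes (bytes) and durations (seconds) are copied from the "Pulled image" entries.
pulls = {
    "kube-apiserver:v1.33.5":          (27_386_827, 2.223353843),
    "kube-controller-manager:v1.33.5": (25_135_832, 1.936021954),
    "kube-scheduler:v1.33.5":          (19_883_910, 1.524604928),
    "kube-proxy:v1.33.5":              (28_239_125, 2.072761558),
    "coredns:v1.12.0":                 (19_148_915, 1.881104877),
    "pause:3.10":                      (   267_933, 0.457311831),
    "etcd:3.5.21-0":                   (70_026_017, 3.012281463),
}

for image, (size_bytes, seconds) in pulls.items():
    mb_per_s = size_bytes / seconds / 1e6
    print(f"{image:35s} {size_bytes/1e6:7.1f} MB in {seconds:5.2f} s -> {mb_per_s:5.1f} MB/s")

The pause image comes out far slower per byte, which presumably reflects fixed per-pull overhead rather than bandwidth, since it is only about 268 kB.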
Nov 5 15:03:48.427937 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 5 15:03:48.428340 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 5 15:03:48.429113 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:03:48.430019 systemd[1]: kubelet.service: Consumed 227ms CPU time, 95.1M memory peak. Nov 5 15:03:48.433532 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:03:48.778586 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:03:48.792385 (kubelet)[2923]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:03:48.869145 kubelet[2923]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:03:48.869145 kubelet[2923]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 15:03:48.869145 kubelet[2923]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:03:48.869664 kubelet[2923]: I1105 15:03:48.869220 2923 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:03:49.996925 kubelet[2923]: I1105 15:03:49.996181 2923 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 5 15:03:49.996925 kubelet[2923]: I1105 15:03:49.996221 2923 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:03:49.996925 kubelet[2923]: I1105 15:03:49.996626 2923 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 15:03:50.055103 kubelet[2923]: E1105 15:03:50.055035 2923 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.23.78:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.78:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 5 15:03:50.057318 kubelet[2923]: I1105 15:03:50.057267 2923 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:03:50.072554 kubelet[2923]: I1105 15:03:50.072515 2923 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 15:03:50.078919 kubelet[2923]: I1105 15:03:50.078185 2923 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 5 15:03:50.078919 kubelet[2923]: I1105 15:03:50.078783 2923 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 15:03:50.079360 kubelet[2923]: I1105 15:03:50.078833 2923 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-78","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 15:03:50.079724 kubelet[2923]: I1105 15:03:50.079703 2923 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 15:03:50.079822 kubelet[2923]: I1105 15:03:50.079805 2923 container_manager_linux.go:303] "Creating device plugin manager" Nov 5 15:03:50.080250 kubelet[2923]: I1105 15:03:50.080229 2923 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:03:50.086672 kubelet[2923]: I1105 15:03:50.086632 2923 kubelet.go:480] "Attempting to sync node with API server" Nov 5 15:03:50.086850 kubelet[2923]: I1105 15:03:50.086829 2923 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 15:03:50.087026 kubelet[2923]: I1105 15:03:50.087008 2923 kubelet.go:386] "Adding apiserver pod source" Nov 5 15:03:50.087134 kubelet[2923]: I1105 15:03:50.087116 2923 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 15:03:50.091629 kubelet[2923]: I1105 15:03:50.091581 2923 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 15:03:50.092855 kubelet[2923]: I1105 15:03:50.092800 2923 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 15:03:50.093117 kubelet[2923]: W1105 15:03:50.093083 2923 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
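The container_manager_linux dump above includes kubelet's default hard eviction thresholds: memory.available below 100Mi, nodefs.available below 10%, imagefs.available below 15%, and inodesFree below 5% on both filesystems. The percentage thresholds only become concrete once applied to the node's real capacities, which this log does not record, so the sketch below applies them to made-up capacities purely for illustration.

# Turn the percentage-based hard eviction thresholds from the dump above into
# absolute numbers. The capacities are hypothetical; the log does not record
# disk or inode totals for this node.
hard_eviction = {
    "memory.available":   "100Mi",
    "nodefs.available":   0.10,
    "nodefs.inodesFree":  0.05,
    "imagefs.available":  0.15,
    "imagefs.inodesFree": 0.05,
}

example_capacity = {                  # hypothetical node, for illustration only
    "nodefs.available":   100e9,      # bytes on a 100 GB root filesystem
    "nodefs.inodesFree":  6_000_000,  # inode count
    "imagefs.available":  100e9,
    "imagefs.inodesFree": 6_000_000,
}

for signal, threshold in hard_eviction.items():
    if isinstance(threshold, str):
        print(f"{signal}: evict when below {threshold}")
    else:
        cutoff = example_capacity[signal] * threshold
        print(f"{signal}: evict when below {threshold:.0%} of capacity (~{cutoff:,.0f} here)")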
Nov 5 15:03:50.098937 kubelet[2923]: I1105 15:03:50.098260 2923 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 15:03:50.098937 kubelet[2923]: I1105 15:03:50.098332 2923 server.go:1289] "Started kubelet" Nov 5 15:03:50.098937 kubelet[2923]: E1105 15:03:50.098662 2923 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.23.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-78&limit=500&resourceVersion=0\": dial tcp 172.31.23.78:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 15:03:50.111375 kubelet[2923]: E1105 15:03:50.111329 2923 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.23.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.78:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 15:03:50.117408 kubelet[2923]: E1105 15:03:50.115084 2923 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.78:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.78:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-78.187524900121aaf0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-78,UID:ip-172-31-23-78,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-78,},FirstTimestamp:2025-11-05 15:03:50.09829144 +0000 UTC m=+1.297470644,LastTimestamp:2025-11-05 15:03:50.09829144 +0000 UTC m=+1.297470644,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-78,}" Nov 5 15:03:50.125348 kubelet[2923]: I1105 15:03:50.125271 2923 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 15:03:50.125616 kubelet[2923]: I1105 15:03:50.125573 2923 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 15:03:50.127321 kubelet[2923]: I1105 15:03:50.127289 2923 server.go:317] "Adding debug handlers to kubelet server" Nov 5 15:03:50.129949 kubelet[2923]: I1105 15:03:50.129591 2923 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 15:03:50.130387 kubelet[2923]: I1105 15:03:50.130357 2923 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 15:03:50.130551 kubelet[2923]: I1105 15:03:50.125360 2923 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 15:03:50.131631 kubelet[2923]: I1105 15:03:50.131576 2923 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 15:03:50.132988 kubelet[2923]: I1105 15:03:50.131833 2923 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 15:03:50.133972 kubelet[2923]: I1105 15:03:50.133929 2923 reconciler.go:26] "Reconciler: start to sync state" Nov 5 15:03:50.134963 kubelet[2923]: E1105 15:03:50.134869 2923 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.23.78:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.78:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 5 15:03:50.135654 kubelet[2923]: I1105 15:03:50.135602 2923 factory.go:223] Registration of the systemd container factory successfully Nov 5 15:03:50.135810 kubelet[2923]: I1105 15:03:50.135768 2923 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 15:03:50.139564 kubelet[2923]: I1105 15:03:50.139514 2923 factory.go:223] Registration of the containerd container factory successfully Nov 5 15:03:50.149059 kubelet[2923]: E1105 15:03:50.149009 2923 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-78\" not found" Nov 5 15:03:50.167091 kubelet[2923]: I1105 15:03:50.167029 2923 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 5 15:03:50.169292 kubelet[2923]: I1105 15:03:50.169233 2923 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 5 15:03:50.169292 kubelet[2923]: I1105 15:03:50.169282 2923 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 5 15:03:50.169488 kubelet[2923]: I1105 15:03:50.169318 2923 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 5 15:03:50.169488 kubelet[2923]: I1105 15:03:50.169334 2923 kubelet.go:2436] "Starting kubelet main sync loop" Nov 5 15:03:50.169488 kubelet[2923]: E1105 15:03:50.169400 2923 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 15:03:50.178212 kubelet[2923]: E1105 15:03:50.178099 2923 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-78?timeout=10s\": dial tcp 172.31.23.78:6443: connect: connection refused" interval="200ms" Nov 5 15:03:50.178728 kubelet[2923]: E1105 15:03:50.178657 2923 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.23.78:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.78:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 5 15:03:50.190952 kubelet[2923]: I1105 15:03:50.189721 2923 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 15:03:50.190952 kubelet[2923]: I1105 15:03:50.189756 2923 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 15:03:50.190952 kubelet[2923]: I1105 15:03:50.189785 2923 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:03:50.193391 kubelet[2923]: I1105 15:03:50.193354 2923 policy_none.go:49] "None policy: Start" Nov 5 15:03:50.193499 kubelet[2923]: I1105 15:03:50.193399 2923 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 15:03:50.193499 kubelet[2923]: I1105 15:03:50.193425 2923 state_mem.go:35] "Initializing new in-memory state store" Nov 5 15:03:50.204060 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 5 15:03:50.227482 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 5 15:03:50.234797 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 5 15:03:50.251238 kubelet[2923]: E1105 15:03:50.249615 2923 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-78\" not found" Nov 5 15:03:50.251238 kubelet[2923]: E1105 15:03:50.249679 2923 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 15:03:50.251238 kubelet[2923]: I1105 15:03:50.250529 2923 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 15:03:50.251238 kubelet[2923]: I1105 15:03:50.250552 2923 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 15:03:50.251238 kubelet[2923]: I1105 15:03:50.251040 2923 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 15:03:50.256149 kubelet[2923]: E1105 15:03:50.255586 2923 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 15:03:50.256149 kubelet[2923]: E1105 15:03:50.255653 2923 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-78\" not found" Nov 5 15:03:50.292636 systemd[1]: Created slice kubepods-burstable-pod6690dcd0a8cf33572fa9792692755732.slice - libcontainer container kubepods-burstable-pod6690dcd0a8cf33572fa9792692755732.slice. Nov 5 15:03:50.304862 kubelet[2923]: E1105 15:03:50.304741 2923 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-78\" not found" node="ip-172-31-23-78" Nov 5 15:03:50.311782 systemd[1]: Created slice kubepods-burstable-podf9ab21e3f52a01c4d0903c87d5f0639e.slice - libcontainer container kubepods-burstable-podf9ab21e3f52a01c4d0903c87d5f0639e.slice. Nov 5 15:03:50.327261 kubelet[2923]: E1105 15:03:50.327207 2923 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-78\" not found" node="ip-172-31-23-78" Nov 5 15:03:50.332998 systemd[1]: Created slice kubepods-burstable-pod0dcf9cf1dc36cf25a5f92d51a1910916.slice - libcontainer container kubepods-burstable-pod0dcf9cf1dc36cf25a5f92d51a1910916.slice. 
Nov 5 15:03:50.334562 kubelet[2923]: I1105 15:03:50.334495 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f9ab21e3f52a01c4d0903c87d5f0639e-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-78\" (UID: \"f9ab21e3f52a01c4d0903c87d5f0639e\") " pod="kube-system/kube-controller-manager-ip-172-31-23-78" Nov 5 15:03:50.334722 kubelet[2923]: I1105 15:03:50.334562 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f9ab21e3f52a01c4d0903c87d5f0639e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-78\" (UID: \"f9ab21e3f52a01c4d0903c87d5f0639e\") " pod="kube-system/kube-controller-manager-ip-172-31-23-78" Nov 5 15:03:50.334722 kubelet[2923]: I1105 15:03:50.334608 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f9ab21e3f52a01c4d0903c87d5f0639e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-78\" (UID: \"f9ab21e3f52a01c4d0903c87d5f0639e\") " pod="kube-system/kube-controller-manager-ip-172-31-23-78" Nov 5 15:03:50.334722 kubelet[2923]: I1105 15:03:50.334648 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0dcf9cf1dc36cf25a5f92d51a1910916-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-78\" (UID: \"0dcf9cf1dc36cf25a5f92d51a1910916\") " pod="kube-system/kube-scheduler-ip-172-31-23-78" Nov 5 15:03:50.334722 kubelet[2923]: I1105 15:03:50.334684 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6690dcd0a8cf33572fa9792692755732-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-78\" (UID: \"6690dcd0a8cf33572fa9792692755732\") " pod="kube-system/kube-apiserver-ip-172-31-23-78" Nov 5 15:03:50.334722 kubelet[2923]: I1105 15:03:50.334719 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f9ab21e3f52a01c4d0903c87d5f0639e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-78\" (UID: \"f9ab21e3f52a01c4d0903c87d5f0639e\") " pod="kube-system/kube-controller-manager-ip-172-31-23-78" Nov 5 15:03:50.335017 kubelet[2923]: I1105 15:03:50.334752 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f9ab21e3f52a01c4d0903c87d5f0639e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-78\" (UID: \"f9ab21e3f52a01c4d0903c87d5f0639e\") " pod="kube-system/kube-controller-manager-ip-172-31-23-78" Nov 5 15:03:50.335017 kubelet[2923]: I1105 15:03:50.334800 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6690dcd0a8cf33572fa9792692755732-ca-certs\") pod \"kube-apiserver-ip-172-31-23-78\" (UID: \"6690dcd0a8cf33572fa9792692755732\") " pod="kube-system/kube-apiserver-ip-172-31-23-78" Nov 5 15:03:50.335017 kubelet[2923]: I1105 15:03:50.334836 2923 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/6690dcd0a8cf33572fa9792692755732-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-78\" (UID: \"6690dcd0a8cf33572fa9792692755732\") " pod="kube-system/kube-apiserver-ip-172-31-23-78" Nov 5 15:03:50.340547 kubelet[2923]: E1105 15:03:50.340489 2923 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-78\" not found" node="ip-172-31-23-78" Nov 5 15:03:50.353588 kubelet[2923]: I1105 15:03:50.353532 2923 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-78" Nov 5 15:03:50.354227 kubelet[2923]: E1105 15:03:50.354143 2923 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.78:6443/api/v1/nodes\": dial tcp 172.31.23.78:6443: connect: connection refused" node="ip-172-31-23-78" Nov 5 15:03:50.378799 kubelet[2923]: E1105 15:03:50.378701 2923 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-78?timeout=10s\": dial tcp 172.31.23.78:6443: connect: connection refused" interval="400ms" Nov 5 15:03:50.557588 kubelet[2923]: I1105 15:03:50.556412 2923 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-78" Nov 5 15:03:50.557588 kubelet[2923]: E1105 15:03:50.556935 2923 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.78:6443/api/v1/nodes\": dial tcp 172.31.23.78:6443: connect: connection refused" node="ip-172-31-23-78" Nov 5 15:03:50.608309 containerd[1997]: time="2025-11-05T15:03:50.608223162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-78,Uid:6690dcd0a8cf33572fa9792692755732,Namespace:kube-system,Attempt:0,}" Nov 5 15:03:50.629677 containerd[1997]: time="2025-11-05T15:03:50.629269675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-78,Uid:f9ab21e3f52a01c4d0903c87d5f0639e,Namespace:kube-system,Attempt:0,}" Nov 5 15:03:50.643339 containerd[1997]: time="2025-11-05T15:03:50.643288627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-78,Uid:0dcf9cf1dc36cf25a5f92d51a1910916,Namespace:kube-system,Attempt:0,}" Nov 5 15:03:50.652391 containerd[1997]: time="2025-11-05T15:03:50.652293235Z" level=info msg="connecting to shim 6c6870aada4dfadbb488b3a0116be79351a3bfd3ea86118216e0e32d47274cff" address="unix:///run/containerd/s/69c600b3aa92dd3ee9e6f84adb7105e1e63863d38762e05b2145be4f612e30e1" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:03:50.718211 systemd[1]: Started cri-containerd-6c6870aada4dfadbb488b3a0116be79351a3bfd3ea86118216e0e32d47274cff.scope - libcontainer container 6c6870aada4dfadbb488b3a0116be79351a3bfd3ea86118216e0e32d47274cff. 
Nov 5 15:03:50.720032 containerd[1997]: time="2025-11-05T15:03:50.718941223Z" level=info msg="connecting to shim 4d8133ee8796062924af695b436b2af2e2e19649b5584bffe1877dcfdecae9f8" address="unix:///run/containerd/s/103b719becc3f9f50e92dca0193c7fc911957fd31522fa69dc7ccf27f0ef8b7f" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:03:50.746936 containerd[1997]: time="2025-11-05T15:03:50.745215535Z" level=info msg="connecting to shim 5d48e94e50c437e9e54b325a1223f76462f89abd3dc2d6f0e2cef50553fd43af" address="unix:///run/containerd/s/b3106deb3a800ba1e9662e941f3f73aec7e98579dc994babc2f64f64f6506dac" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:03:50.779848 kubelet[2923]: E1105 15:03:50.779792 2923 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-78?timeout=10s\": dial tcp 172.31.23.78:6443: connect: connection refused" interval="800ms" Nov 5 15:03:50.818440 systemd[1]: Started cri-containerd-4d8133ee8796062924af695b436b2af2e2e19649b5584bffe1877dcfdecae9f8.scope - libcontainer container 4d8133ee8796062924af695b436b2af2e2e19649b5584bffe1877dcfdecae9f8. Nov 5 15:03:50.833224 systemd[1]: Started cri-containerd-5d48e94e50c437e9e54b325a1223f76462f89abd3dc2d6f0e2cef50553fd43af.scope - libcontainer container 5d48e94e50c437e9e54b325a1223f76462f89abd3dc2d6f0e2cef50553fd43af. Nov 5 15:03:50.878760 containerd[1997]: time="2025-11-05T15:03:50.878481248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-78,Uid:6690dcd0a8cf33572fa9792692755732,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c6870aada4dfadbb488b3a0116be79351a3bfd3ea86118216e0e32d47274cff\"" Nov 5 15:03:50.893698 containerd[1997]: time="2025-11-05T15:03:50.893633012Z" level=info msg="CreateContainer within sandbox \"6c6870aada4dfadbb488b3a0116be79351a3bfd3ea86118216e0e32d47274cff\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 15:03:50.914169 containerd[1997]: time="2025-11-05T15:03:50.913010288Z" level=info msg="Container 3328f4c06ea49d20410b6f6213fecce8317c3e96cc062338cbc97aaecc1c0104: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:03:50.941154 containerd[1997]: time="2025-11-05T15:03:50.941066972Z" level=info msg="CreateContainer within sandbox \"6c6870aada4dfadbb488b3a0116be79351a3bfd3ea86118216e0e32d47274cff\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"3328f4c06ea49d20410b6f6213fecce8317c3e96cc062338cbc97aaecc1c0104\"" Nov 5 15:03:50.942230 containerd[1997]: time="2025-11-05T15:03:50.942163088Z" level=info msg="StartContainer for \"3328f4c06ea49d20410b6f6213fecce8317c3e96cc062338cbc97aaecc1c0104\"" Nov 5 15:03:50.948219 containerd[1997]: time="2025-11-05T15:03:50.948141308Z" level=info msg="connecting to shim 3328f4c06ea49d20410b6f6213fecce8317c3e96cc062338cbc97aaecc1c0104" address="unix:///run/containerd/s/69c600b3aa92dd3ee9e6f84adb7105e1e63863d38762e05b2145be4f612e30e1" protocol=ttrpc version=3 Nov 5 15:03:50.961262 kubelet[2923]: I1105 15:03:50.960830 2923 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-78" Nov 5 15:03:50.961425 kubelet[2923]: E1105 15:03:50.961370 2923 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.78:6443/api/v1/nodes\": dial tcp 172.31.23.78:6443: connect: connection refused" node="ip-172-31-23-78" Nov 5 15:03:50.974910 containerd[1997]: time="2025-11-05T15:03:50.974840468Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-78,Uid:0dcf9cf1dc36cf25a5f92d51a1910916,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d48e94e50c437e9e54b325a1223f76462f89abd3dc2d6f0e2cef50553fd43af\"" Nov 5 15:03:50.976809 containerd[1997]: time="2025-11-05T15:03:50.976753748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-78,Uid:f9ab21e3f52a01c4d0903c87d5f0639e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d8133ee8796062924af695b436b2af2e2e19649b5584bffe1877dcfdecae9f8\"" Nov 5 15:03:50.985517 containerd[1997]: time="2025-11-05T15:03:50.984617984Z" level=info msg="CreateContainer within sandbox \"5d48e94e50c437e9e54b325a1223f76462f89abd3dc2d6f0e2cef50553fd43af\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 15:03:50.987725 containerd[1997]: time="2025-11-05T15:03:50.987666188Z" level=info msg="CreateContainer within sandbox \"4d8133ee8796062924af695b436b2af2e2e19649b5584bffe1877dcfdecae9f8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 15:03:51.003342 containerd[1997]: time="2025-11-05T15:03:51.003294028Z" level=info msg="Container aeb56f75c0ef37ea2284ba2c790eaeda6a0b20bd8a2dcb07eb7debf40b70c907: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:03:51.005995 containerd[1997]: time="2025-11-05T15:03:51.005865616Z" level=info msg="Container 193011a918827a5a67788d2b37d5af2f511b14ea3aa0c646b07436e143ae5110: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:03:51.008430 systemd[1]: Started cri-containerd-3328f4c06ea49d20410b6f6213fecce8317c3e96cc062338cbc97aaecc1c0104.scope - libcontainer container 3328f4c06ea49d20410b6f6213fecce8317c3e96cc062338cbc97aaecc1c0104. Nov 5 15:03:51.037435 containerd[1997]: time="2025-11-05T15:03:51.037374905Z" level=info msg="CreateContainer within sandbox \"4d8133ee8796062924af695b436b2af2e2e19649b5584bffe1877dcfdecae9f8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"193011a918827a5a67788d2b37d5af2f511b14ea3aa0c646b07436e143ae5110\"" Nov 5 15:03:51.038069 containerd[1997]: time="2025-11-05T15:03:51.037986209Z" level=info msg="CreateContainer within sandbox \"5d48e94e50c437e9e54b325a1223f76462f89abd3dc2d6f0e2cef50553fd43af\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"aeb56f75c0ef37ea2284ba2c790eaeda6a0b20bd8a2dcb07eb7debf40b70c907\"" Nov 5 15:03:51.040920 containerd[1997]: time="2025-11-05T15:03:51.040826501Z" level=info msg="StartContainer for \"193011a918827a5a67788d2b37d5af2f511b14ea3aa0c646b07436e143ae5110\"" Nov 5 15:03:51.041542 containerd[1997]: time="2025-11-05T15:03:51.041461913Z" level=info msg="StartContainer for \"aeb56f75c0ef37ea2284ba2c790eaeda6a0b20bd8a2dcb07eb7debf40b70c907\"" Nov 5 15:03:51.045535 containerd[1997]: time="2025-11-05T15:03:51.045472457Z" level=info msg="connecting to shim 193011a918827a5a67788d2b37d5af2f511b14ea3aa0c646b07436e143ae5110" address="unix:///run/containerd/s/103b719becc3f9f50e92dca0193c7fc911957fd31522fa69dc7ccf27f0ef8b7f" protocol=ttrpc version=3 Nov 5 15:03:51.046790 containerd[1997]: time="2025-11-05T15:03:51.046721285Z" level=info msg="connecting to shim aeb56f75c0ef37ea2284ba2c790eaeda6a0b20bd8a2dcb07eb7debf40b70c907" address="unix:///run/containerd/s/b3106deb3a800ba1e9662e941f3f73aec7e98579dc994babc2f64f64f6506dac" protocol=ttrpc version=3 Nov 5 15:03:51.091144 systemd[1]: Started cri-containerd-aeb56f75c0ef37ea2284ba2c790eaeda6a0b20bd8a2dcb07eb7debf40b70c907.scope - 
libcontainer container aeb56f75c0ef37ea2284ba2c790eaeda6a0b20bd8a2dcb07eb7debf40b70c907. Nov 5 15:03:51.106459 systemd[1]: Started cri-containerd-193011a918827a5a67788d2b37d5af2f511b14ea3aa0c646b07436e143ae5110.scope - libcontainer container 193011a918827a5a67788d2b37d5af2f511b14ea3aa0c646b07436e143ae5110. Nov 5 15:03:51.160204 containerd[1997]: time="2025-11-05T15:03:51.160052525Z" level=info msg="StartContainer for \"3328f4c06ea49d20410b6f6213fecce8317c3e96cc062338cbc97aaecc1c0104\" returns successfully" Nov 5 15:03:51.218335 kubelet[2923]: E1105 15:03:51.218052 2923 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-78\" not found" node="ip-172-31-23-78" Nov 5 15:03:51.221984 kubelet[2923]: E1105 15:03:51.221917 2923 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.23.78:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.78:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 5 15:03:51.326454 containerd[1997]: time="2025-11-05T15:03:51.326345262Z" level=info msg="StartContainer for \"193011a918827a5a67788d2b37d5af2f511b14ea3aa0c646b07436e143ae5110\" returns successfully" Nov 5 15:03:51.362802 containerd[1997]: time="2025-11-05T15:03:51.362563446Z" level=info msg="StartContainer for \"aeb56f75c0ef37ea2284ba2c790eaeda6a0b20bd8a2dcb07eb7debf40b70c907\" returns successfully" Nov 5 15:03:51.422029 kubelet[2923]: E1105 15:03:51.421979 2923 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.23.78:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-78&limit=500&resourceVersion=0\": dial tcp 172.31.23.78:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 5 15:03:51.765149 kubelet[2923]: I1105 15:03:51.765101 2923 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-78" Nov 5 15:03:52.227772 kubelet[2923]: E1105 15:03:52.227720 2923 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-78\" not found" node="ip-172-31-23-78" Nov 5 15:03:52.236549 kubelet[2923]: E1105 15:03:52.236479 2923 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-78\" not found" node="ip-172-31-23-78" Nov 5 15:03:52.239377 kubelet[2923]: E1105 15:03:52.239332 2923 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-78\" not found" node="ip-172-31-23-78" Nov 5 15:03:53.237484 kubelet[2923]: E1105 15:03:53.237423 2923 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-78\" not found" node="ip-172-31-23-78" Nov 5 15:03:53.240212 kubelet[2923]: E1105 15:03:53.240012 2923 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-78\" not found" node="ip-172-31-23-78" Nov 5 15:03:54.241075 kubelet[2923]: E1105 15:03:54.240140 2923 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-78\" not found" node="ip-172-31-23-78" Nov 5 15:03:54.241075 kubelet[2923]: E1105 15:03:54.240711 2923 kubelet.go:3305] "No need to create 
a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-78\" not found" node="ip-172-31-23-78" Nov 5 15:03:54.584489 kubelet[2923]: I1105 15:03:54.584317 2923 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-23-78" Nov 5 15:03:54.633349 kubelet[2923]: E1105 15:03:54.632808 2923 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ip-172-31-23-78.187524900121aaf0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-78,UID:ip-172-31-23-78,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-78,},FirstTimestamp:2025-11-05 15:03:50.09829144 +0000 UTC m=+1.297470644,LastTimestamp:2025-11-05 15:03:50.09829144 +0000 UTC m=+1.297470644,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-78,}" Nov 5 15:03:54.650115 kubelet[2923]: I1105 15:03:54.650070 2923 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-78" Nov 5 15:03:54.685542 kubelet[2923]: E1105 15:03:54.685481 2923 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-23-78\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-23-78" Nov 5 15:03:54.685542 kubelet[2923]: I1105 15:03:54.685532 2923 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-78" Nov 5 15:03:54.691818 kubelet[2923]: E1105 15:03:54.691757 2923 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-23-78\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-23-78" Nov 5 15:03:54.691818 kubelet[2923]: I1105 15:03:54.691808 2923 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-78" Nov 5 15:03:54.695634 kubelet[2923]: E1105 15:03:54.694920 2923 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-23-78\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-23-78" Nov 5 15:03:55.105949 kubelet[2923]: I1105 15:03:55.104543 2923 apiserver.go:52] "Watching apiserver" Nov 5 15:03:55.132978 kubelet[2923]: I1105 15:03:55.132919 2923 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 15:03:55.238558 kubelet[2923]: I1105 15:03:55.238380 2923 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-78" Nov 5 15:03:57.169302 kubelet[2923]: I1105 15:03:57.169253 2923 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-78" Nov 5 15:03:58.025808 systemd[1]: Reload requested from client PID 3210 ('systemctl') (unit session-7.scope)... Nov 5 15:03:58.025839 systemd[1]: Reloading... Nov 5 15:03:58.229952 zram_generator::config[3258]: No configuration found. Nov 5 15:03:58.742542 systemd[1]: Reloading finished in 716 ms. Nov 5 15:03:58.786702 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:03:58.801779 systemd[1]: kubelet.service: Deactivated successfully. 
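Editor's note: the "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" errors above occur because the API server has not yet finished bootstrapping its built-in priority classes; they resolve on their own once it does (later attempts fail only with "already exists"). For reference, a sketch of the shape of that built-in object; the value 2000001000 is the upstream constant for system-node-critical and nothing needs to be applied by hand.

```python
# Shape of the built-in PriorityClass whose temporary absence causes the
# mirror-pod creation errors above. Normally auto-created by the API server.
import json

priority_class = {
    "apiVersion": "scheduling.k8s.io/v1",
    "kind": "PriorityClass",
    "metadata": {"name": "system-node-critical"},
    "value": 2000001000,
    "globalDefault": False,
    "description": "Used for system critical pods that must not be moved from their current node.",
}
print(json.dumps(priority_class, indent=2))
```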
Nov 5 15:03:58.802277 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:03:58.802362 systemd[1]: kubelet.service: Consumed 2.079s CPU time, 126.6M memory peak. Nov 5 15:03:58.806609 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 15:03:59.154861 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 15:03:59.176115 (kubelet)[3315]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 15:03:59.283567 kubelet[3315]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:03:59.283567 kubelet[3315]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 15:03:59.283567 kubelet[3315]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 15:03:59.283567 kubelet[3315]: I1105 15:03:59.283403 3315 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 15:03:59.302921 kubelet[3315]: I1105 15:03:59.302394 3315 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Nov 5 15:03:59.302921 kubelet[3315]: I1105 15:03:59.302439 3315 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 15:03:59.302921 kubelet[3315]: I1105 15:03:59.302815 3315 server.go:956] "Client rotation is on, will bootstrap in background" Nov 5 15:03:59.305483 kubelet[3315]: I1105 15:03:59.305443 3315 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 5 15:03:59.309997 kubelet[3315]: I1105 15:03:59.309929 3315 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 15:03:59.333948 kubelet[3315]: I1105 15:03:59.332470 3315 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 15:03:59.350387 kubelet[3315]: I1105 15:03:59.350341 3315 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 5 15:03:59.351035 kubelet[3315]: I1105 15:03:59.350984 3315 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 15:03:59.352913 kubelet[3315]: I1105 15:03:59.351944 3315 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-78","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 15:03:59.352913 kubelet[3315]: I1105 15:03:59.352643 3315 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 15:03:59.352913 kubelet[3315]: I1105 15:03:59.352666 3315 container_manager_linux.go:303] "Creating device plugin manager" Nov 5 15:03:59.352913 kubelet[3315]: I1105 15:03:59.352755 3315 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:03:59.355988 kubelet[3315]: I1105 15:03:59.355949 3315 kubelet.go:480] "Attempting to sync node with API server" Nov 5 15:03:59.356922 kubelet[3315]: I1105 15:03:59.356168 3315 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 15:03:59.356922 kubelet[3315]: I1105 15:03:59.356228 3315 kubelet.go:386] "Adding apiserver pod source" Nov 5 15:03:59.356922 kubelet[3315]: I1105 15:03:59.356259 3315 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 15:03:59.363868 kubelet[3315]: I1105 15:03:59.363826 3315 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 15:03:59.368156 kubelet[3315]: I1105 15:03:59.368111 3315 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 5 15:03:59.374921 kubelet[3315]: I1105 15:03:59.374761 3315 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 15:03:59.375166 kubelet[3315]: I1105 15:03:59.375145 3315 server.go:1289] "Started kubelet" Nov 5 15:03:59.391913 kubelet[3315]: I1105 15:03:59.389753 3315 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 15:03:59.399780 kubelet[3315]: I1105 15:03:59.399125 3315 
dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 15:03:59.402922 kubelet[3315]: I1105 15:03:59.392855 3315 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 15:03:59.414318 kubelet[3315]: I1105 15:03:59.412614 3315 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 15:03:59.414453 kubelet[3315]: E1105 15:03:59.414259 3315 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-78\" not found" Nov 5 15:03:59.418179 kubelet[3315]: I1105 15:03:59.394968 3315 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 15:03:59.420933 kubelet[3315]: I1105 15:03:59.420741 3315 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 15:03:59.426599 kubelet[3315]: I1105 15:03:59.426558 3315 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 15:03:59.429092 kubelet[3315]: I1105 15:03:59.428188 3315 reconciler.go:26] "Reconciler: start to sync state" Nov 5 15:03:59.456510 kubelet[3315]: I1105 15:03:59.456454 3315 server.go:317] "Adding debug handlers to kubelet server" Nov 5 15:03:59.465515 kubelet[3315]: I1105 15:03:59.465444 3315 factory.go:223] Registration of the systemd container factory successfully Nov 5 15:03:59.465712 kubelet[3315]: I1105 15:03:59.465665 3315 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 15:03:59.470580 kubelet[3315]: E1105 15:03:59.470524 3315 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 15:03:59.478244 kubelet[3315]: I1105 15:03:59.478193 3315 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Nov 5 15:03:59.484120 kubelet[3315]: I1105 15:03:59.484084 3315 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Nov 5 15:03:59.485544 kubelet[3315]: I1105 15:03:59.484996 3315 status_manager.go:230] "Starting to sync pod status with apiserver" Nov 5 15:03:59.485544 kubelet[3315]: I1105 15:03:59.485040 3315 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
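Editor's note: the "Registration of the crio container factory failed" line just below (and its twin earlier in the log) only means /var/run/crio/crio.sock is absent on this containerd host; the containerd factory registers successfully. A small probe of the usual CRI endpoints, where the containerd socket path is the conventional default rather than a value from this log:

```python
# Quick check of which CRI runtime sockets exist on the node.
import os

candidates = {
    "containerd": "/run/containerd/containerd.sock",  # assumed default CRI endpoint
    "cri-o": "/var/run/crio/crio.sock",               # path from the failed registration above
}
for runtime, sock in candidates.items():
    state = "present" if os.path.exists(sock) else "missing"
    print(f"{runtime:<11} {sock} -> {state}")
```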
Nov 5 15:03:59.485544 kubelet[3315]: I1105 15:03:59.485055 3315 kubelet.go:2436] "Starting kubelet main sync loop" Nov 5 15:03:59.485544 kubelet[3315]: E1105 15:03:59.485130 3315 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 15:03:59.501599 kubelet[3315]: I1105 15:03:59.501562 3315 factory.go:223] Registration of the containerd container factory successfully Nov 5 15:03:59.515027 kubelet[3315]: E1105 15:03:59.514988 3315 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-78\" not found" Nov 5 15:03:59.585491 kubelet[3315]: E1105 15:03:59.585205 3315 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 5 15:03:59.665048 kubelet[3315]: I1105 15:03:59.664082 3315 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 15:03:59.665048 kubelet[3315]: I1105 15:03:59.664113 3315 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 15:03:59.665048 kubelet[3315]: I1105 15:03:59.664149 3315 state_mem.go:36] "Initialized new in-memory state store" Nov 5 15:03:59.665048 kubelet[3315]: I1105 15:03:59.664372 3315 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 15:03:59.665048 kubelet[3315]: I1105 15:03:59.664401 3315 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 15:03:59.665048 kubelet[3315]: I1105 15:03:59.664434 3315 policy_none.go:49] "None policy: Start" Nov 5 15:03:59.665048 kubelet[3315]: I1105 15:03:59.664453 3315 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 15:03:59.665048 kubelet[3315]: I1105 15:03:59.664472 3315 state_mem.go:35] "Initializing new in-memory state store" Nov 5 15:03:59.665048 kubelet[3315]: I1105 15:03:59.664638 3315 state_mem.go:75] "Updated machine memory state" Nov 5 15:03:59.685721 kubelet[3315]: E1105 15:03:59.685661 3315 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 5 15:03:59.688939 kubelet[3315]: I1105 15:03:59.687678 3315 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 15:03:59.688939 kubelet[3315]: I1105 15:03:59.688613 3315 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 15:03:59.691457 kubelet[3315]: I1105 15:03:59.690669 3315 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 15:03:59.695528 kubelet[3315]: E1105 15:03:59.695479 3315 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 5 15:03:59.755037 update_engine[1951]: I20251105 15:03:59.754949 1951 update_attempter.cc:509] Updating boot flags... 
Nov 5 15:03:59.788244 kubelet[3315]: I1105 15:03:59.788203 3315 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-78" Nov 5 15:03:59.791111 kubelet[3315]: I1105 15:03:59.790549 3315 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-78" Nov 5 15:03:59.795744 kubelet[3315]: I1105 15:03:59.793478 3315 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-78" Nov 5 15:03:59.810746 kubelet[3315]: E1105 15:03:59.809243 3315 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-23-78\" already exists" pod="kube-system/kube-apiserver-ip-172-31-23-78" Nov 5 15:03:59.810746 kubelet[3315]: E1105 15:03:59.809366 3315 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-23-78\" already exists" pod="kube-system/kube-scheduler-ip-172-31-23-78" Nov 5 15:03:59.824947 kubelet[3315]: I1105 15:03:59.824295 3315 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-78" Nov 5 15:03:59.831302 kubelet[3315]: I1105 15:03:59.830745 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f9ab21e3f52a01c4d0903c87d5f0639e-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-78\" (UID: \"f9ab21e3f52a01c4d0903c87d5f0639e\") " pod="kube-system/kube-controller-manager-ip-172-31-23-78" Nov 5 15:03:59.831302 kubelet[3315]: I1105 15:03:59.830815 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6690dcd0a8cf33572fa9792692755732-ca-certs\") pod \"kube-apiserver-ip-172-31-23-78\" (UID: \"6690dcd0a8cf33572fa9792692755732\") " pod="kube-system/kube-apiserver-ip-172-31-23-78" Nov 5 15:03:59.831302 kubelet[3315]: I1105 15:03:59.830857 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6690dcd0a8cf33572fa9792692755732-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-78\" (UID: \"6690dcd0a8cf33572fa9792692755732\") " pod="kube-system/kube-apiserver-ip-172-31-23-78" Nov 5 15:03:59.831302 kubelet[3315]: I1105 15:03:59.830957 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f9ab21e3f52a01c4d0903c87d5f0639e-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-78\" (UID: \"f9ab21e3f52a01c4d0903c87d5f0639e\") " pod="kube-system/kube-controller-manager-ip-172-31-23-78" Nov 5 15:03:59.831302 kubelet[3315]: I1105 15:03:59.830997 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f9ab21e3f52a01c4d0903c87d5f0639e-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-78\" (UID: \"f9ab21e3f52a01c4d0903c87d5f0639e\") " pod="kube-system/kube-controller-manager-ip-172-31-23-78" Nov 5 15:03:59.831757 kubelet[3315]: I1105 15:03:59.831033 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0dcf9cf1dc36cf25a5f92d51a1910916-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-78\" (UID: \"0dcf9cf1dc36cf25a5f92d51a1910916\") " 
pod="kube-system/kube-scheduler-ip-172-31-23-78" Nov 5 15:03:59.831757 kubelet[3315]: I1105 15:03:59.831067 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6690dcd0a8cf33572fa9792692755732-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-78\" (UID: \"6690dcd0a8cf33572fa9792692755732\") " pod="kube-system/kube-apiserver-ip-172-31-23-78" Nov 5 15:03:59.831757 kubelet[3315]: I1105 15:03:59.831102 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f9ab21e3f52a01c4d0903c87d5f0639e-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-78\" (UID: \"f9ab21e3f52a01c4d0903c87d5f0639e\") " pod="kube-system/kube-controller-manager-ip-172-31-23-78" Nov 5 15:03:59.831757 kubelet[3315]: I1105 15:03:59.831143 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f9ab21e3f52a01c4d0903c87d5f0639e-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-78\" (UID: \"f9ab21e3f52a01c4d0903c87d5f0639e\") " pod="kube-system/kube-controller-manager-ip-172-31-23-78" Nov 5 15:03:59.877347 kubelet[3315]: I1105 15:03:59.873985 3315 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-23-78" Nov 5 15:03:59.877347 kubelet[3315]: I1105 15:03:59.874098 3315 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-23-78" Nov 5 15:04:00.359585 kubelet[3315]: I1105 15:04:00.359514 3315 apiserver.go:52] "Watching apiserver" Nov 5 15:04:00.428227 kubelet[3315]: I1105 15:04:00.428100 3315 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 15:04:00.581259 kubelet[3315]: I1105 15:04:00.580678 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-78" podStartSLOduration=5.58065592 podStartE2EDuration="5.58065592s" podCreationTimestamp="2025-11-05 15:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:04:00.528513004 +0000 UTC m=+1.341032924" watchObservedRunningTime="2025-11-05 15:04:00.58065592 +0000 UTC m=+1.393175828" Nov 5 15:04:00.614739 kubelet[3315]: I1105 15:04:00.614578 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-78" podStartSLOduration=3.6145545759999997 podStartE2EDuration="3.614554576s" podCreationTimestamp="2025-11-05 15:03:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:04:00.581060416 +0000 UTC m=+1.393580324" watchObservedRunningTime="2025-11-05 15:04:00.614554576 +0000 UTC m=+1.427074472" Nov 5 15:04:00.641284 kubelet[3315]: I1105 15:04:00.641195 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-78" podStartSLOduration=1.6411722439999998 podStartE2EDuration="1.641172244s" podCreationTimestamp="2025-11-05 15:03:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:04:00.615411388 +0000 UTC m=+1.427931284" watchObservedRunningTime="2025-11-05 15:04:00.641172244 +0000 UTC m=+1.453692176" Nov 5 15:04:02.917798 
kubelet[3315]: I1105 15:04:02.917724 3315 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 15:04:02.919485 containerd[1997]: time="2025-11-05T15:04:02.919429016Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 5 15:04:02.920870 kubelet[3315]: I1105 15:04:02.919820 3315 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 15:04:03.625880 systemd[1]: Created slice kubepods-besteffort-pode7603134_5dac_4f8b_837d_99eabd361f43.slice - libcontainer container kubepods-besteffort-pode7603134_5dac_4f8b_837d_99eabd361f43.slice. Nov 5 15:04:03.658579 kubelet[3315]: I1105 15:04:03.658504 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e7603134-5dac-4f8b-837d-99eabd361f43-var-lib-calico\") pod \"tigera-operator-7dcd859c48-rcmtk\" (UID: \"e7603134-5dac-4f8b-837d-99eabd361f43\") " pod="tigera-operator/tigera-operator-7dcd859c48-rcmtk" Nov 5 15:04:03.658936 kubelet[3315]: I1105 15:04:03.658694 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6gl9\" (UniqueName: \"kubernetes.io/projected/e7603134-5dac-4f8b-837d-99eabd361f43-kube-api-access-r6gl9\") pod \"tigera-operator-7dcd859c48-rcmtk\" (UID: \"e7603134-5dac-4f8b-837d-99eabd361f43\") " pod="tigera-operator/tigera-operator-7dcd859c48-rcmtk" Nov 5 15:04:03.714552 systemd[1]: Created slice kubepods-besteffort-pod7f9c77ac_382e_47e9_aeef_4f3c65516185.slice - libcontainer container kubepods-besteffort-pod7f9c77ac_382e_47e9_aeef_4f3c65516185.slice. Nov 5 15:04:03.759648 kubelet[3315]: I1105 15:04:03.759581 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f9c77ac-382e-47e9-aeef-4f3c65516185-xtables-lock\") pod \"kube-proxy-mgjk9\" (UID: \"7f9c77ac-382e-47e9-aeef-4f3c65516185\") " pod="kube-system/kube-proxy-mgjk9" Nov 5 15:04:03.759804 kubelet[3315]: I1105 15:04:03.759700 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7f9c77ac-382e-47e9-aeef-4f3c65516185-kube-proxy\") pod \"kube-proxy-mgjk9\" (UID: \"7f9c77ac-382e-47e9-aeef-4f3c65516185\") " pod="kube-system/kube-proxy-mgjk9" Nov 5 15:04:03.759879 kubelet[3315]: I1105 15:04:03.759814 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f9c77ac-382e-47e9-aeef-4f3c65516185-lib-modules\") pod \"kube-proxy-mgjk9\" (UID: \"7f9c77ac-382e-47e9-aeef-4f3c65516185\") " pod="kube-system/kube-proxy-mgjk9" Nov 5 15:04:03.759971 kubelet[3315]: I1105 15:04:03.759950 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxwnl\" (UniqueName: \"kubernetes.io/projected/7f9c77ac-382e-47e9-aeef-4f3c65516185-kube-api-access-cxwnl\") pod \"kube-proxy-mgjk9\" (UID: \"7f9c77ac-382e-47e9-aeef-4f3c65516185\") " pod="kube-system/kube-proxy-mgjk9" Nov 5 15:04:03.940624 containerd[1997]: time="2025-11-05T15:04:03.940275201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-rcmtk,Uid:e7603134-5dac-4f8b-837d-99eabd361f43,Namespace:tigera-operator,Attempt:0,}" Nov 5 15:04:03.971086 
containerd[1997]: time="2025-11-05T15:04:03.971003445Z" level=info msg="connecting to shim ab661f5a84f78140b9df64066103e79bf9dbf8c9086c18f181df9322f7e5172f" address="unix:///run/containerd/s/eee357984149ba1d744ed8e93b8aa04ce7f372f175b7b74aa581763e09560802" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:04:04.022190 systemd[1]: Started cri-containerd-ab661f5a84f78140b9df64066103e79bf9dbf8c9086c18f181df9322f7e5172f.scope - libcontainer container ab661f5a84f78140b9df64066103e79bf9dbf8c9086c18f181df9322f7e5172f. Nov 5 15:04:04.025171 containerd[1997]: time="2025-11-05T15:04:04.025117721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mgjk9,Uid:7f9c77ac-382e-47e9-aeef-4f3c65516185,Namespace:kube-system,Attempt:0,}" Nov 5 15:04:04.071161 containerd[1997]: time="2025-11-05T15:04:04.071031653Z" level=info msg="connecting to shim 7bf5cc339d0994ba63af12d6ab5c0e553baea2965af3dd5e4206fd9fc9630629" address="unix:///run/containerd/s/c73474f0278a1fa40a4629db3d540df13bb584a9fd0a1125cbe2831ba6fd4fc8" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:04:04.121489 systemd[1]: Started cri-containerd-7bf5cc339d0994ba63af12d6ab5c0e553baea2965af3dd5e4206fd9fc9630629.scope - libcontainer container 7bf5cc339d0994ba63af12d6ab5c0e553baea2965af3dd5e4206fd9fc9630629. Nov 5 15:04:04.157340 containerd[1997]: time="2025-11-05T15:04:04.157273110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-rcmtk,Uid:e7603134-5dac-4f8b-837d-99eabd361f43,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ab661f5a84f78140b9df64066103e79bf9dbf8c9086c18f181df9322f7e5172f\"" Nov 5 15:04:04.168425 containerd[1997]: time="2025-11-05T15:04:04.168109914Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Nov 5 15:04:04.205222 containerd[1997]: time="2025-11-05T15:04:04.204880734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mgjk9,Uid:7f9c77ac-382e-47e9-aeef-4f3c65516185,Namespace:kube-system,Attempt:0,} returns sandbox id \"7bf5cc339d0994ba63af12d6ab5c0e553baea2965af3dd5e4206fd9fc9630629\"" Nov 5 15:04:04.216326 containerd[1997]: time="2025-11-05T15:04:04.216256386Z" level=info msg="CreateContainer within sandbox \"7bf5cc339d0994ba63af12d6ab5c0e553baea2965af3dd5e4206fd9fc9630629\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 15:04:04.229666 containerd[1997]: time="2025-11-05T15:04:04.229591374Z" level=info msg="Container fb6988c40b8a13ba6ca0a269b8830368167d6ae802435fe8f482c6631d822aee: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:04:04.242360 containerd[1997]: time="2025-11-05T15:04:04.242281062Z" level=info msg="CreateContainer within sandbox \"7bf5cc339d0994ba63af12d6ab5c0e553baea2965af3dd5e4206fd9fc9630629\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fb6988c40b8a13ba6ca0a269b8830368167d6ae802435fe8f482c6631d822aee\"" Nov 5 15:04:04.247789 containerd[1997]: time="2025-11-05T15:04:04.246367590Z" level=info msg="StartContainer for \"fb6988c40b8a13ba6ca0a269b8830368167d6ae802435fe8f482c6631d822aee\"" Nov 5 15:04:04.252256 containerd[1997]: time="2025-11-05T15:04:04.252047814Z" level=info msg="connecting to shim fb6988c40b8a13ba6ca0a269b8830368167d6ae802435fe8f482c6631d822aee" address="unix:///run/containerd/s/c73474f0278a1fa40a4629db3d540df13bb584a9fd0a1125cbe2831ba6fd4fc8" protocol=ttrpc version=3 Nov 5 15:04:04.289240 systemd[1]: Started cri-containerd-fb6988c40b8a13ba6ca0a269b8830368167d6ae802435fe8f482c6631d822aee.scope - libcontainer container 
fb6988c40b8a13ba6ca0a269b8830368167d6ae802435fe8f482c6631d822aee. Nov 5 15:04:04.375420 containerd[1997]: time="2025-11-05T15:04:04.375321151Z" level=info msg="StartContainer for \"fb6988c40b8a13ba6ca0a269b8830368167d6ae802435fe8f482c6631d822aee\" returns successfully" Nov 5 15:04:04.676365 kubelet[3315]: I1105 15:04:04.676235 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mgjk9" podStartSLOduration=1.676209932 podStartE2EDuration="1.676209932s" podCreationTimestamp="2025-11-05 15:04:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:04:04.675594584 +0000 UTC m=+5.488114492" watchObservedRunningTime="2025-11-05 15:04:04.676209932 +0000 UTC m=+5.488729840" Nov 5 15:04:05.558588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3602703724.mount: Deactivated successfully. Nov 5 15:04:06.442220 containerd[1997]: time="2025-11-05T15:04:06.442155117Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:04:06.443682 containerd[1997]: time="2025-11-05T15:04:06.443446497Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Nov 5 15:04:06.444775 containerd[1997]: time="2025-11-05T15:04:06.444716817Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:04:06.448321 containerd[1997]: time="2025-11-05T15:04:06.448269621Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:04:06.449774 containerd[1997]: time="2025-11-05T15:04:06.449696157Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.281518347s" Nov 5 15:04:06.450058 containerd[1997]: time="2025-11-05T15:04:06.449923197Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Nov 5 15:04:06.458003 containerd[1997]: time="2025-11-05T15:04:06.457758597Z" level=info msg="CreateContainer within sandbox \"ab661f5a84f78140b9df64066103e79bf9dbf8c9086c18f181df9322f7e5172f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 5 15:04:06.469599 containerd[1997]: time="2025-11-05T15:04:06.469534977Z" level=info msg="Container bbb21aa4d661408ff1e01af555343338ee286e88314f3922a9e7272f7a216c3f: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:04:06.479264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1996336858.mount: Deactivated successfully. 
Nov 5 15:04:06.487085 containerd[1997]: time="2025-11-05T15:04:06.486959277Z" level=info msg="CreateContainer within sandbox \"ab661f5a84f78140b9df64066103e79bf9dbf8c9086c18f181df9322f7e5172f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"bbb21aa4d661408ff1e01af555343338ee286e88314f3922a9e7272f7a216c3f\"" Nov 5 15:04:06.490540 containerd[1997]: time="2025-11-05T15:04:06.490493493Z" level=info msg="StartContainer for \"bbb21aa4d661408ff1e01af555343338ee286e88314f3922a9e7272f7a216c3f\"" Nov 5 15:04:06.496351 containerd[1997]: time="2025-11-05T15:04:06.496061361Z" level=info msg="connecting to shim bbb21aa4d661408ff1e01af555343338ee286e88314f3922a9e7272f7a216c3f" address="unix:///run/containerd/s/eee357984149ba1d744ed8e93b8aa04ce7f372f175b7b74aa581763e09560802" protocol=ttrpc version=3 Nov 5 15:04:06.545223 systemd[1]: Started cri-containerd-bbb21aa4d661408ff1e01af555343338ee286e88314f3922a9e7272f7a216c3f.scope - libcontainer container bbb21aa4d661408ff1e01af555343338ee286e88314f3922a9e7272f7a216c3f. Nov 5 15:04:06.599925 containerd[1997]: time="2025-11-05T15:04:06.599603170Z" level=info msg="StartContainer for \"bbb21aa4d661408ff1e01af555343338ee286e88314f3922a9e7272f7a216c3f\" returns successfully" Nov 5 15:04:08.069543 kubelet[3315]: I1105 15:04:08.068785 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-rcmtk" podStartSLOduration=2.78012487 podStartE2EDuration="5.068762961s" podCreationTimestamp="2025-11-05 15:04:03 +0000 UTC" firstStartedPulling="2025-11-05 15:04:04.162633822 +0000 UTC m=+4.975153742" lastFinishedPulling="2025-11-05 15:04:06.451271925 +0000 UTC m=+7.263791833" observedRunningTime="2025-11-05 15:04:06.674561002 +0000 UTC m=+7.487080910" watchObservedRunningTime="2025-11-05 15:04:08.068762961 +0000 UTC m=+8.881282857" Nov 5 15:04:13.531625 sudo[2341]: pam_unix(sudo:session): session closed for user root Nov 5 15:04:13.557979 sshd[2340]: Connection closed by 139.178.89.65 port 45914 Nov 5 15:04:13.558807 sshd-session[2337]: pam_unix(sshd:session): session closed for user core Nov 5 15:04:13.572601 systemd[1]: sshd@6-172.31.23.78:22-139.178.89.65:45914.service: Deactivated successfully. Nov 5 15:04:13.584703 systemd[1]: session-7.scope: Deactivated successfully. Nov 5 15:04:13.586064 systemd[1]: session-7.scope: Consumed 10.597s CPU time, 224.4M memory peak. Nov 5 15:04:13.592019 systemd-logind[1950]: Session 7 logged out. Waiting for processes to exit. Nov 5 15:04:13.597027 systemd-logind[1950]: Removed session 7. Nov 5 15:04:36.449979 systemd[1]: Created slice kubepods-besteffort-podb415ec0a_44f7_4076_87a9_d591886f3c6c.slice - libcontainer container kubepods-besteffort-podb415ec0a_44f7_4076_87a9_d591886f3c6c.slice. 
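For the tigera-operator pod above, podStartE2EDuration is 5.068762961s while podStartSLOduration is 2.78012487: the SLO figure excludes the image-pull window between firstStartedPulling and lastFinishedPulling. A back-of-the-envelope check using the monotonic m=+ offsets from that log entry (a sketch of the arithmetic only, not the kubelet's actual code path; variable names are mine):

# Values copied from the pod_startup_latency_tracker entry above.
e2e_duration       = 5.068762961   # podStartE2EDuration, seconds
first_started_pull = 4.975153742   # firstStartedPulling, m=+ offset in seconds
last_finished_pull = 7.263791833   # lastFinishedPulling, m=+ offset in seconds

pull_window  = last_finished_pull - first_started_pull  # ~2.288638091 s pulling quay.io/tigera/operator:v1.38.7
slo_duration = e2e_duration - pull_window               # ~2.780124870 s, matching podStartSLOduration=2.78012487

print(f"pull window {pull_window:.9f}s, SLO duration {slo_duration:.9f}s")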
Nov 5 15:04:36.477405 kubelet[3315]: I1105 15:04:36.477188 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b415ec0a-44f7-4076-87a9-d591886f3c6c-tigera-ca-bundle\") pod \"calico-typha-794cff8fc8-z7q6b\" (UID: \"b415ec0a-44f7-4076-87a9-d591886f3c6c\") " pod="calico-system/calico-typha-794cff8fc8-z7q6b" Nov 5 15:04:36.477405 kubelet[3315]: I1105 15:04:36.477258 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lswwb\" (UniqueName: \"kubernetes.io/projected/b415ec0a-44f7-4076-87a9-d591886f3c6c-kube-api-access-lswwb\") pod \"calico-typha-794cff8fc8-z7q6b\" (UID: \"b415ec0a-44f7-4076-87a9-d591886f3c6c\") " pod="calico-system/calico-typha-794cff8fc8-z7q6b" Nov 5 15:04:36.477405 kubelet[3315]: I1105 15:04:36.477302 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/b415ec0a-44f7-4076-87a9-d591886f3c6c-typha-certs\") pod \"calico-typha-794cff8fc8-z7q6b\" (UID: \"b415ec0a-44f7-4076-87a9-d591886f3c6c\") " pod="calico-system/calico-typha-794cff8fc8-z7q6b" Nov 5 15:04:36.733763 kubelet[3315]: I1105 15:04:36.733556 3315 status_manager.go:895] "Failed to get status for pod" podUID="7ad347c7-b4f1-4be5-acfd-375260c5bc71" pod="calico-system/calico-node-thb6w" err="pods \"calico-node-thb6w\" is forbidden: User \"system:node:ip-172-31-23-78\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-23-78' and this object" Nov 5 15:04:36.734494 kubelet[3315]: E1105 15:04:36.734403 3315 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"node-certs\" is forbidden: User \"system:node:ip-172-31-23-78\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-23-78' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"node-certs\"" type="*v1.Secret" Nov 5 15:04:36.735995 kubelet[3315]: E1105 15:04:36.735873 3315 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"cni-config\" is forbidden: User \"system:node:ip-172-31-23-78\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-23-78' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"cni-config\"" type="*v1.ConfigMap" Nov 5 15:04:36.738772 systemd[1]: Created slice kubepods-besteffort-pod7ad347c7_b4f1_4be5_acfd_375260c5bc71.slice - libcontainer container kubepods-besteffort-pod7ad347c7_b4f1_4be5_acfd_375260c5bc71.slice. 
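The two "Created slice" entries above show the naming scheme the kubelet's systemd cgroup driver uses for per-pod cgroups: for these BestEffort pods, the pod UID with its dashes replaced by underscores is appended to the kubepods-besteffort- prefix. A minimal sketch of that mapping (the helper is illustrative only, not a kubelet API, and covers only the BestEffort case seen here):

def besteffort_pod_slice(pod_uid: str) -> str:
    # kubepods-besteffort-pod<uid with '-' -> '_'>.slice, nested under
    # kubepods.slice/kubepods-besteffort.slice in the systemd hierarchy.
    return "kubepods-besteffort-pod" + pod_uid.replace("-", "_") + ".slice"

print(besteffort_pod_slice("7ad347c7-b4f1-4be5-acfd-375260c5bc71"))
# kubepods-besteffort-pod7ad347c7_b4f1_4be5_acfd_375260c5bc71.slice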
Nov 5 15:04:36.759920 containerd[1997]: time="2025-11-05T15:04:36.759185476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-794cff8fc8-z7q6b,Uid:b415ec0a-44f7-4076-87a9-d591886f3c6c,Namespace:calico-system,Attempt:0,}" Nov 5 15:04:36.782122 kubelet[3315]: I1105 15:04:36.779735 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7ad347c7-b4f1-4be5-acfd-375260c5bc71-flexvol-driver-host\") pod \"calico-node-thb6w\" (UID: \"7ad347c7-b4f1-4be5-acfd-375260c5bc71\") " pod="calico-system/calico-node-thb6w" Nov 5 15:04:36.788601 kubelet[3315]: I1105 15:04:36.788031 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7ad347c7-b4f1-4be5-acfd-375260c5bc71-cni-net-dir\") pod \"calico-node-thb6w\" (UID: \"7ad347c7-b4f1-4be5-acfd-375260c5bc71\") " pod="calico-system/calico-node-thb6w" Nov 5 15:04:36.788601 kubelet[3315]: I1105 15:04:36.788099 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7ad347c7-b4f1-4be5-acfd-375260c5bc71-cni-log-dir\") pod \"calico-node-thb6w\" (UID: \"7ad347c7-b4f1-4be5-acfd-375260c5bc71\") " pod="calico-system/calico-node-thb6w" Nov 5 15:04:36.788601 kubelet[3315]: I1105 15:04:36.788135 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7ad347c7-b4f1-4be5-acfd-375260c5bc71-policysync\") pod \"calico-node-thb6w\" (UID: \"7ad347c7-b4f1-4be5-acfd-375260c5bc71\") " pod="calico-system/calico-node-thb6w" Nov 5 15:04:36.788601 kubelet[3315]: I1105 15:04:36.788190 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ad347c7-b4f1-4be5-acfd-375260c5bc71-xtables-lock\") pod \"calico-node-thb6w\" (UID: \"7ad347c7-b4f1-4be5-acfd-375260c5bc71\") " pod="calico-system/calico-node-thb6w" Nov 5 15:04:36.788601 kubelet[3315]: I1105 15:04:36.788227 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7ad347c7-b4f1-4be5-acfd-375260c5bc71-node-certs\") pod \"calico-node-thb6w\" (UID: \"7ad347c7-b4f1-4be5-acfd-375260c5bc71\") " pod="calico-system/calico-node-thb6w" Nov 5 15:04:36.789017 kubelet[3315]: I1105 15:04:36.788260 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7ad347c7-b4f1-4be5-acfd-375260c5bc71-tigera-ca-bundle\") pod \"calico-node-thb6w\" (UID: \"7ad347c7-b4f1-4be5-acfd-375260c5bc71\") " pod="calico-system/calico-node-thb6w" Nov 5 15:04:36.789017 kubelet[3315]: I1105 15:04:36.788297 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ad347c7-b4f1-4be5-acfd-375260c5bc71-lib-modules\") pod \"calico-node-thb6w\" (UID: \"7ad347c7-b4f1-4be5-acfd-375260c5bc71\") " pod="calico-system/calico-node-thb6w" Nov 5 15:04:36.789017 kubelet[3315]: I1105 15:04:36.788329 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/7ad347c7-b4f1-4be5-acfd-375260c5bc71-var-run-calico\") pod \"calico-node-thb6w\" (UID: \"7ad347c7-b4f1-4be5-acfd-375260c5bc71\") " pod="calico-system/calico-node-thb6w" Nov 5 15:04:36.789017 kubelet[3315]: I1105 15:04:36.788369 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7ad347c7-b4f1-4be5-acfd-375260c5bc71-cni-bin-dir\") pod \"calico-node-thb6w\" (UID: \"7ad347c7-b4f1-4be5-acfd-375260c5bc71\") " pod="calico-system/calico-node-thb6w" Nov 5 15:04:36.789017 kubelet[3315]: I1105 15:04:36.788405 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m7h5\" (UniqueName: \"kubernetes.io/projected/7ad347c7-b4f1-4be5-acfd-375260c5bc71-kube-api-access-4m7h5\") pod \"calico-node-thb6w\" (UID: \"7ad347c7-b4f1-4be5-acfd-375260c5bc71\") " pod="calico-system/calico-node-thb6w" Nov 5 15:04:36.789273 kubelet[3315]: I1105 15:04:36.788443 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7ad347c7-b4f1-4be5-acfd-375260c5bc71-var-lib-calico\") pod \"calico-node-thb6w\" (UID: \"7ad347c7-b4f1-4be5-acfd-375260c5bc71\") " pod="calico-system/calico-node-thb6w" Nov 5 15:04:36.817572 containerd[1997]: time="2025-11-05T15:04:36.817475992Z" level=info msg="connecting to shim 78059641b320b86516a9b3e52b6a6372aad3b1ff0bd87e4f3110f3a498cfcd90" address="unix:///run/containerd/s/ddf9003360401493667d761a88ecaa4b5eef0d24d19d131249ca1d1a5e1efa47" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:04:36.890233 systemd[1]: Started cri-containerd-78059641b320b86516a9b3e52b6a6372aad3b1ff0bd87e4f3110f3a498cfcd90.scope - libcontainer container 78059641b320b86516a9b3e52b6a6372aad3b1ff0bd87e4f3110f3a498cfcd90. Nov 5 15:04:36.895531 kubelet[3315]: E1105 15:04:36.895206 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:36.895531 kubelet[3315]: W1105 15:04:36.895245 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:36.895531 kubelet[3315]: E1105 15:04:36.895287 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:36.896680 kubelet[3315]: E1105 15:04:36.896630 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:36.896932 kubelet[3315]: W1105 15:04:36.896825 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:36.896932 kubelet[3315]: E1105 15:04:36.896865 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:36.898558 kubelet[3315]: E1105 15:04:36.897609 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:36.898558 kubelet[3315]: W1105 15:04:36.898372 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:36.898558 kubelet[3315]: E1105 15:04:36.898416 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:36.899585 kubelet[3315]: E1105 15:04:36.899213 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:36.899585 kubelet[3315]: W1105 15:04:36.899245 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:36.899585 kubelet[3315]: E1105 15:04:36.899275 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:36.901018 kubelet[3315]: E1105 15:04:36.900688 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:36.901018 kubelet[3315]: W1105 15:04:36.900729 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:36.901018 kubelet[3315]: E1105 15:04:36.900761 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:36.902590 kubelet[3315]: E1105 15:04:36.902554 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:36.903993 kubelet[3315]: W1105 15:04:36.903929 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:36.904167 kubelet[3315]: E1105 15:04:36.904141 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:36.904676 kubelet[3315]: E1105 15:04:36.904647 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:36.904918 kubelet[3315]: W1105 15:04:36.904803 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:36.904918 kubelet[3315]: E1105 15:04:36.904837 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:36.905311 kubelet[3315]: E1105 15:04:36.905287 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:36.906280 kubelet[3315]: W1105 15:04:36.905971 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:36.906280 kubelet[3315]: E1105 15:04:36.906014 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:36.907913 kubelet[3315]: E1105 15:04:36.907026 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:36.908149 kubelet[3315]: W1105 15:04:36.908111 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:36.908271 kubelet[3315]: E1105 15:04:36.908247 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:36.908847 kubelet[3315]: E1105 15:04:36.908815 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:36.909159 kubelet[3315]: W1105 15:04:36.909061 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:36.909159 kubelet[3315]: E1105 15:04:36.909103 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:36.930287 kubelet[3315]: E1105 15:04:36.930029 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:36.931201 kubelet[3315]: W1105 15:04:36.930067 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:36.931201 kubelet[3315]: E1105 15:04:36.930490 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:36.973070 kubelet[3315]: E1105 15:04:36.972381 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbscs" podUID="80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572" Nov 5 15:04:37.064447 kubelet[3315]: E1105 15:04:37.064036 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.064447 kubelet[3315]: W1105 15:04:37.064070 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.064447 kubelet[3315]: E1105 15:04:37.064102 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.066523 kubelet[3315]: E1105 15:04:37.065991 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.067808 kubelet[3315]: W1105 15:04:37.067206 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.067808 kubelet[3315]: E1105 15:04:37.067297 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.068941 kubelet[3315]: E1105 15:04:37.068709 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.070245 kubelet[3315]: W1105 15:04:37.070198 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.070437 kubelet[3315]: E1105 15:04:37.070412 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.072060 kubelet[3315]: E1105 15:04:37.071759 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.072652 kubelet[3315]: W1105 15:04:37.071792 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.072652 kubelet[3315]: E1105 15:04:37.072281 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:37.073674 kubelet[3315]: E1105 15:04:37.073162 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.073674 kubelet[3315]: W1105 15:04:37.073317 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.073674 kubelet[3315]: E1105 15:04:37.073353 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.074266 kubelet[3315]: E1105 15:04:37.074217 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.074854 kubelet[3315]: W1105 15:04:37.074471 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.074854 kubelet[3315]: E1105 15:04:37.074512 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.076123 kubelet[3315]: E1105 15:04:37.076085 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.076547 kubelet[3315]: W1105 15:04:37.076298 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.076547 kubelet[3315]: E1105 15:04:37.076338 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.076813 kubelet[3315]: E1105 15:04:37.076790 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.077238 kubelet[3315]: W1105 15:04:37.076923 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.077238 kubelet[3315]: E1105 15:04:37.076981 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.078092 kubelet[3315]: E1105 15:04:37.078055 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.078937 kubelet[3315]: W1105 15:04:37.078265 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.078937 kubelet[3315]: E1105 15:04:37.078306 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:37.080205 kubelet[3315]: E1105 15:04:37.079338 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.080205 kubelet[3315]: W1105 15:04:37.079372 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.080205 kubelet[3315]: E1105 15:04:37.079403 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.080912 kubelet[3315]: E1105 15:04:37.080696 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.080912 kubelet[3315]: W1105 15:04:37.080730 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.080912 kubelet[3315]: E1105 15:04:37.080761 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.082283 kubelet[3315]: E1105 15:04:37.082245 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.082704 kubelet[3315]: W1105 15:04:37.082455 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.082704 kubelet[3315]: E1105 15:04:37.082495 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.083559 kubelet[3315]: E1105 15:04:37.083030 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.083559 kubelet[3315]: W1105 15:04:37.083056 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.083559 kubelet[3315]: E1105 15:04:37.083081 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.084300 kubelet[3315]: E1105 15:04:37.084265 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.085190 kubelet[3315]: W1105 15:04:37.084939 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.085190 kubelet[3315]: E1105 15:04:37.084985 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:37.087512 kubelet[3315]: E1105 15:04:37.087234 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.087512 kubelet[3315]: W1105 15:04:37.087270 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.087512 kubelet[3315]: E1105 15:04:37.087302 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.088749 kubelet[3315]: E1105 15:04:37.088712 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.089042 kubelet[3315]: W1105 15:04:37.089012 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.089392 kubelet[3315]: E1105 15:04:37.089154 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.090165 kubelet[3315]: E1105 15:04:37.089837 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.090165 kubelet[3315]: W1105 15:04:37.089963 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.090165 kubelet[3315]: E1105 15:04:37.089994 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.091953 kubelet[3315]: E1105 15:04:37.091129 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.091953 kubelet[3315]: W1105 15:04:37.091164 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.091953 kubelet[3315]: E1105 15:04:37.091195 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.092827 kubelet[3315]: E1105 15:04:37.092595 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.092827 kubelet[3315]: W1105 15:04:37.092624 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.092827 kubelet[3315]: E1105 15:04:37.092653 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:37.093251 kubelet[3315]: E1105 15:04:37.093227 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.093361 kubelet[3315]: W1105 15:04:37.093338 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.093493 kubelet[3315]: E1105 15:04:37.093449 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.094541 kubelet[3315]: E1105 15:04:37.094148 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.094946 kubelet[3315]: W1105 15:04:37.094709 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.094946 kubelet[3315]: E1105 15:04:37.094754 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.094946 kubelet[3315]: I1105 15:04:37.094812 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572-socket-dir\") pod \"csi-node-driver-dbscs\" (UID: \"80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572\") " pod="calico-system/csi-node-driver-dbscs" Nov 5 15:04:37.096463 kubelet[3315]: E1105 15:04:37.096421 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.096925 kubelet[3315]: W1105 15:04:37.096651 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.097182 kubelet[3315]: E1105 15:04:37.097043 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.097182 kubelet[3315]: I1105 15:04:37.097117 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mt6w6\" (UniqueName: \"kubernetes.io/projected/80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572-kube-api-access-mt6w6\") pod \"csi-node-driver-dbscs\" (UID: \"80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572\") " pod="calico-system/csi-node-driver-dbscs" Nov 5 15:04:37.100110 kubelet[3315]: E1105 15:04:37.100041 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.100110 kubelet[3315]: W1105 15:04:37.100083 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.100677 kubelet[3315]: E1105 15:04:37.100118 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:37.100677 kubelet[3315]: E1105 15:04:37.100504 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.100677 kubelet[3315]: W1105 15:04:37.100524 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.100677 kubelet[3315]: E1105 15:04:37.100545 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.102731 kubelet[3315]: E1105 15:04:37.100822 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.102731 kubelet[3315]: W1105 15:04:37.100836 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.102731 kubelet[3315]: E1105 15:04:37.100855 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.102731 kubelet[3315]: I1105 15:04:37.100927 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572-kubelet-dir\") pod \"csi-node-driver-dbscs\" (UID: \"80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572\") " pod="calico-system/csi-node-driver-dbscs" Nov 5 15:04:37.102731 kubelet[3315]: E1105 15:04:37.101429 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.102731 kubelet[3315]: W1105 15:04:37.101450 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.102731 kubelet[3315]: E1105 15:04:37.101478 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.102731 kubelet[3315]: I1105 15:04:37.101512 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572-varrun\") pod \"csi-node-driver-dbscs\" (UID: \"80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572\") " pod="calico-system/csi-node-driver-dbscs" Nov 5 15:04:37.102731 kubelet[3315]: E1105 15:04:37.102513 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.103329 kubelet[3315]: W1105 15:04:37.102543 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.103329 kubelet[3315]: E1105 15:04:37.102570 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:37.104244 kubelet[3315]: E1105 15:04:37.103742 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.104244 kubelet[3315]: W1105 15:04:37.103782 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.104244 kubelet[3315]: E1105 15:04:37.103814 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.105440 kubelet[3315]: E1105 15:04:37.105213 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.105618 kubelet[3315]: W1105 15:04:37.105399 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.105683 kubelet[3315]: E1105 15:04:37.105639 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.107067 kubelet[3315]: I1105 15:04:37.106932 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572-registration-dir\") pod \"csi-node-driver-dbscs\" (UID: \"80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572\") " pod="calico-system/csi-node-driver-dbscs" Nov 5 15:04:37.107236 kubelet[3315]: E1105 15:04:37.107089 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.107236 kubelet[3315]: W1105 15:04:37.107108 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.107236 kubelet[3315]: E1105 15:04:37.107153 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.108532 kubelet[3315]: E1105 15:04:37.108481 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.108532 kubelet[3315]: W1105 15:04:37.108519 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.108805 kubelet[3315]: E1105 15:04:37.108550 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:37.110536 kubelet[3315]: E1105 15:04:37.110482 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.110536 kubelet[3315]: W1105 15:04:37.110523 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.110780 kubelet[3315]: E1105 15:04:37.110557 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.112308 kubelet[3315]: E1105 15:04:37.112264 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.112308 kubelet[3315]: W1105 15:04:37.112303 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.112701 kubelet[3315]: E1105 15:04:37.112336 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.113606 kubelet[3315]: E1105 15:04:37.113567 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.113606 kubelet[3315]: W1105 15:04:37.113604 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.113606 kubelet[3315]: E1105 15:04:37.113635 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.115361 kubelet[3315]: E1105 15:04:37.115311 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.115361 kubelet[3315]: W1105 15:04:37.115350 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.115642 kubelet[3315]: E1105 15:04:37.115382 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:37.184614 containerd[1997]: time="2025-11-05T15:04:37.184361702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-794cff8fc8-z7q6b,Uid:b415ec0a-44f7-4076-87a9-d591886f3c6c,Namespace:calico-system,Attempt:0,} returns sandbox id \"78059641b320b86516a9b3e52b6a6372aad3b1ff0bd87e4f3110f3a498cfcd90\"" Nov 5 15:04:37.191144 containerd[1997]: time="2025-11-05T15:04:37.191062250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Nov 5 15:04:37.209996 kubelet[3315]: E1105 15:04:37.209951 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.209996 kubelet[3315]: W1105 15:04:37.209984 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.210159 kubelet[3315]: E1105 15:04:37.210014 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.210781 kubelet[3315]: E1105 15:04:37.210708 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.210781 kubelet[3315]: W1105 15:04:37.210742 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.210781 kubelet[3315]: E1105 15:04:37.210770 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.211540 kubelet[3315]: E1105 15:04:37.211460 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.211540 kubelet[3315]: W1105 15:04:37.211491 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.211540 kubelet[3315]: E1105 15:04:37.211516 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.211910 kubelet[3315]: E1105 15:04:37.211825 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.211910 kubelet[3315]: W1105 15:04:37.211853 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.211910 kubelet[3315]: E1105 15:04:37.211874 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:37.212380 kubelet[3315]: E1105 15:04:37.212351 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.212477 kubelet[3315]: W1105 15:04:37.212378 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.212477 kubelet[3315]: E1105 15:04:37.212401 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.212743 kubelet[3315]: E1105 15:04:37.212715 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.212743 kubelet[3315]: W1105 15:04:37.212740 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.213109 kubelet[3315]: E1105 15:04:37.212761 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.213109 kubelet[3315]: E1105 15:04:37.213084 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.213109 kubelet[3315]: W1105 15:04:37.213101 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.213416 kubelet[3315]: E1105 15:04:37.213121 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.213416 kubelet[3315]: E1105 15:04:37.213380 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.213416 kubelet[3315]: W1105 15:04:37.213395 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.213416 kubelet[3315]: E1105 15:04:37.213413 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.214079 kubelet[3315]: E1105 15:04:37.214048 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.214257 kubelet[3315]: W1105 15:04:37.214077 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.214257 kubelet[3315]: E1105 15:04:37.214101 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:37.214587 kubelet[3315]: E1105 15:04:37.214558 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.214716 kubelet[3315]: W1105 15:04:37.214586 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.214716 kubelet[3315]: E1105 15:04:37.214636 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.215865 kubelet[3315]: E1105 15:04:37.215816 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.215865 kubelet[3315]: W1105 15:04:37.215852 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.216188 kubelet[3315]: E1105 15:04:37.215880 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.216300 kubelet[3315]: E1105 15:04:37.216255 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.216300 kubelet[3315]: W1105 15:04:37.216273 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.216300 kubelet[3315]: E1105 15:04:37.216292 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.216723 kubelet[3315]: E1105 15:04:37.216708 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.216779 kubelet[3315]: W1105 15:04:37.216731 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.216866 kubelet[3315]: E1105 15:04:37.216787 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.217267 kubelet[3315]: E1105 15:04:37.217240 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.217357 kubelet[3315]: W1105 15:04:37.217266 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.217357 kubelet[3315]: E1105 15:04:37.217289 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:37.217689 kubelet[3315]: E1105 15:04:37.217662 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.217768 kubelet[3315]: W1105 15:04:37.217687 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.217768 kubelet[3315]: E1105 15:04:37.217710 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.218041 kubelet[3315]: E1105 15:04:37.218014 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.218128 kubelet[3315]: W1105 15:04:37.218040 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.218128 kubelet[3315]: E1105 15:04:37.218062 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.218353 kubelet[3315]: E1105 15:04:37.218327 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.218430 kubelet[3315]: W1105 15:04:37.218352 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.218430 kubelet[3315]: E1105 15:04:37.218372 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.218767 kubelet[3315]: E1105 15:04:37.218742 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.218836 kubelet[3315]: W1105 15:04:37.218766 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.218836 kubelet[3315]: E1105 15:04:37.218785 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.219338 kubelet[3315]: E1105 15:04:37.219314 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.219526 kubelet[3315]: W1105 15:04:37.219430 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.219526 kubelet[3315]: E1105 15:04:37.219459 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:37.219778 kubelet[3315]: E1105 15:04:37.219751 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.219856 kubelet[3315]: W1105 15:04:37.219777 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.219856 kubelet[3315]: E1105 15:04:37.219797 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.220261 kubelet[3315]: E1105 15:04:37.220233 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.220342 kubelet[3315]: W1105 15:04:37.220259 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.220342 kubelet[3315]: E1105 15:04:37.220280 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.220853 kubelet[3315]: E1105 15:04:37.220538 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.220853 kubelet[3315]: W1105 15:04:37.220554 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.220853 kubelet[3315]: E1105 15:04:37.220574 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.223536 kubelet[3315]: E1105 15:04:37.223474 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.223970 kubelet[3315]: W1105 15:04:37.223776 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.223970 kubelet[3315]: E1105 15:04:37.223814 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.224715 kubelet[3315]: E1105 15:04:37.224666 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.224820 kubelet[3315]: W1105 15:04:37.224694 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.225027 kubelet[3315]: E1105 15:04:37.224938 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:37.225907 kubelet[3315]: E1105 15:04:37.225752 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.226074 kubelet[3315]: W1105 15:04:37.225867 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.226251 kubelet[3315]: E1105 15:04:37.226165 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.241763 kubelet[3315]: E1105 15:04:37.241729 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.242038 kubelet[3315]: W1105 15:04:37.241945 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.242038 kubelet[3315]: E1105 15:04:37.241984 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:37.898912 kubelet[3315]: E1105 15:04:37.898721 3315 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Nov 5 15:04:37.898912 kubelet[3315]: E1105 15:04:37.898853 3315 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7ad347c7-b4f1-4be5-acfd-375260c5bc71-node-certs podName:7ad347c7-b4f1-4be5-acfd-375260c5bc71 nodeName:}" failed. No retries permitted until 2025-11-05 15:04:38.398820321 +0000 UTC m=+39.211340229 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/7ad347c7-b4f1-4be5-acfd-375260c5bc71-node-certs") pod "calico-node-thb6w" (UID: "7ad347c7-b4f1-4be5-acfd-375260c5bc71") : failed to sync secret cache: timed out waiting for the condition Nov 5 15:04:37.916803 kubelet[3315]: E1105 15:04:37.916713 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:37.916803 kubelet[3315]: W1105 15:04:37.916767 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:37.917035 kubelet[3315]: E1105 15:04:37.916811 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:38.017815 kubelet[3315]: E1105 15:04:38.017666 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:38.017815 kubelet[3315]: W1105 15:04:38.017700 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:38.017815 kubelet[3315]: E1105 15:04:38.017731 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:38.119550 kubelet[3315]: E1105 15:04:38.119507 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:38.119550 kubelet[3315]: W1105 15:04:38.119545 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:38.119828 kubelet[3315]: E1105 15:04:38.119576 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:38.220570 kubelet[3315]: E1105 15:04:38.220443 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:38.220570 kubelet[3315]: W1105 15:04:38.220476 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:38.220570 kubelet[3315]: E1105 15:04:38.220504 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:38.321325 kubelet[3315]: E1105 15:04:38.321291 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:38.321588 kubelet[3315]: W1105 15:04:38.321489 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:38.321588 kubelet[3315]: E1105 15:04:38.321525 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:38.422662 kubelet[3315]: E1105 15:04:38.422614 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:38.422662 kubelet[3315]: W1105 15:04:38.422650 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:38.423546 kubelet[3315]: E1105 15:04:38.422685 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:38.424010 kubelet[3315]: E1105 15:04:38.423982 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:38.424364 kubelet[3315]: W1105 15:04:38.424150 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:38.424364 kubelet[3315]: E1105 15:04:38.424189 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:38.424623 kubelet[3315]: E1105 15:04:38.424599 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:38.424726 kubelet[3315]: W1105 15:04:38.424703 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:38.424832 kubelet[3315]: E1105 15:04:38.424810 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:38.425555 kubelet[3315]: E1105 15:04:38.425342 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:38.425555 kubelet[3315]: W1105 15:04:38.425369 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:38.425555 kubelet[3315]: E1105 15:04:38.425393 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:38.425808 kubelet[3315]: E1105 15:04:38.425719 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:38.425808 kubelet[3315]: W1105 15:04:38.425740 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:38.425808 kubelet[3315]: E1105 15:04:38.425759 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:38.432825 kubelet[3315]: E1105 15:04:38.432779 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:38.432825 kubelet[3315]: W1105 15:04:38.432814 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:38.433133 kubelet[3315]: E1105 15:04:38.432843 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:38.486011 kubelet[3315]: E1105 15:04:38.485832 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbscs" podUID="80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572" Nov 5 15:04:38.547723 containerd[1997]: time="2025-11-05T15:04:38.547662161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-thb6w,Uid:7ad347c7-b4f1-4be5-acfd-375260c5bc71,Namespace:calico-system,Attempt:0,}" Nov 5 15:04:38.593259 containerd[1997]: time="2025-11-05T15:04:38.593177213Z" level=info msg="connecting to shim 9d976a1a8f21c4a0741fb9b2bf53b4928d70cdf5700da4394582241f25384992" address="unix:///run/containerd/s/4f1fdc9e4bfca8499f605fb36a1da096476a537ec658ddf01158ee6019eb23fe" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:04:38.646178 systemd[1]: Started cri-containerd-9d976a1a8f21c4a0741fb9b2bf53b4928d70cdf5700da4394582241f25384992.scope - libcontainer container 9d976a1a8f21c4a0741fb9b2bf53b4928d70cdf5700da4394582241f25384992. Nov 5 15:04:38.707447 containerd[1997]: time="2025-11-05T15:04:38.707373605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-thb6w,Uid:7ad347c7-b4f1-4be5-acfd-375260c5bc71,Namespace:calico-system,Attempt:0,} returns sandbox id \"9d976a1a8f21c4a0741fb9b2bf53b4928d70cdf5700da4394582241f25384992\"" Nov 5 15:04:39.128866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount857341577.mount: Deactivated successfully. Nov 5 15:04:40.090039 containerd[1997]: time="2025-11-05T15:04:40.089963020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:04:40.091735 containerd[1997]: time="2025-11-05T15:04:40.091451524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Nov 5 15:04:40.092631 containerd[1997]: time="2025-11-05T15:04:40.092577268Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:04:40.095917 containerd[1997]: time="2025-11-05T15:04:40.095843788Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:04:40.097134 containerd[1997]: time="2025-11-05T15:04:40.097091848Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.90594789s" Nov 5 15:04:40.097384 containerd[1997]: time="2025-11-05T15:04:40.097249912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Nov 5 15:04:40.100290 containerd[1997]: time="2025-11-05T15:04:40.100228420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Nov 5 15:04:40.128326 containerd[1997]: time="2025-11-05T15:04:40.128263000Z" level=info msg="CreateContainer within sandbox 
\"78059641b320b86516a9b3e52b6a6372aad3b1ff0bd87e4f3110f3a498cfcd90\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 5 15:04:40.142800 containerd[1997]: time="2025-11-05T15:04:40.142734413Z" level=info msg="Container f88bf456036c3b0dfe4052aba99b3f954b346123f69069e01a9f04f5ba13d00e: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:04:40.151333 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1999958985.mount: Deactivated successfully. Nov 5 15:04:40.161840 containerd[1997]: time="2025-11-05T15:04:40.161762441Z" level=info msg="CreateContainer within sandbox \"78059641b320b86516a9b3e52b6a6372aad3b1ff0bd87e4f3110f3a498cfcd90\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f88bf456036c3b0dfe4052aba99b3f954b346123f69069e01a9f04f5ba13d00e\"" Nov 5 15:04:40.164149 containerd[1997]: time="2025-11-05T15:04:40.164085533Z" level=info msg="StartContainer for \"f88bf456036c3b0dfe4052aba99b3f954b346123f69069e01a9f04f5ba13d00e\"" Nov 5 15:04:40.168634 containerd[1997]: time="2025-11-05T15:04:40.168552497Z" level=info msg="connecting to shim f88bf456036c3b0dfe4052aba99b3f954b346123f69069e01a9f04f5ba13d00e" address="unix:///run/containerd/s/ddf9003360401493667d761a88ecaa4b5eef0d24d19d131249ca1d1a5e1efa47" protocol=ttrpc version=3 Nov 5 15:04:40.208269 systemd[1]: Started cri-containerd-f88bf456036c3b0dfe4052aba99b3f954b346123f69069e01a9f04f5ba13d00e.scope - libcontainer container f88bf456036c3b0dfe4052aba99b3f954b346123f69069e01a9f04f5ba13d00e. Nov 5 15:04:40.295396 containerd[1997]: time="2025-11-05T15:04:40.295316813Z" level=info msg="StartContainer for \"f88bf456036c3b0dfe4052aba99b3f954b346123f69069e01a9f04f5ba13d00e\" returns successfully" Nov 5 15:04:40.486222 kubelet[3315]: E1105 15:04:40.485706 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbscs" podUID="80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572" Nov 5 15:04:40.820008 kubelet[3315]: I1105 15:04:40.818555 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-794cff8fc8-z7q6b" podStartSLOduration=1.908614542 podStartE2EDuration="4.818534156s" podCreationTimestamp="2025-11-05 15:04:36 +0000 UTC" firstStartedPulling="2025-11-05 15:04:37.189092138 +0000 UTC m=+38.001612046" lastFinishedPulling="2025-11-05 15:04:40.099011764 +0000 UTC m=+40.911531660" observedRunningTime="2025-11-05 15:04:40.817717364 +0000 UTC m=+41.630237380" watchObservedRunningTime="2025-11-05 15:04:40.818534156 +0000 UTC m=+41.631054064" Nov 5 15:04:40.822507 kubelet[3315]: E1105 15:04:40.822135 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.822507 kubelet[3315]: W1105 15:04:40.822220 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.822507 kubelet[3315]: E1105 15:04:40.822257 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:40.823492 kubelet[3315]: E1105 15:04:40.823121 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.823783 kubelet[3315]: W1105 15:04:40.823611 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.824167 kubelet[3315]: E1105 15:04:40.824026 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.825769 kubelet[3315]: E1105 15:04:40.825494 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.825769 kubelet[3315]: W1105 15:04:40.825532 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.825769 kubelet[3315]: E1105 15:04:40.825566 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.827295 kubelet[3315]: E1105 15:04:40.827153 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.827295 kubelet[3315]: W1105 15:04:40.827224 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.827668 kubelet[3315]: E1105 15:04:40.827255 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.828860 kubelet[3315]: E1105 15:04:40.827947 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.829336 kubelet[3315]: W1105 15:04:40.829102 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.829336 kubelet[3315]: E1105 15:04:40.829148 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.830929 kubelet[3315]: E1105 15:04:40.829563 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.831119 kubelet[3315]: W1105 15:04:40.831069 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.831420 kubelet[3315]: E1105 15:04:40.831205 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:40.831632 kubelet[3315]: E1105 15:04:40.831611 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.831736 kubelet[3315]: W1105 15:04:40.831715 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.831854 kubelet[3315]: E1105 15:04:40.831831 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.832491 kubelet[3315]: E1105 15:04:40.832266 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.832491 kubelet[3315]: W1105 15:04:40.832303 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.832491 kubelet[3315]: E1105 15:04:40.832327 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.833940 kubelet[3315]: E1105 15:04:40.832787 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.834117 kubelet[3315]: W1105 15:04:40.834086 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.834247 kubelet[3315]: E1105 15:04:40.834223 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.834918 kubelet[3315]: E1105 15:04:40.834677 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.834918 kubelet[3315]: W1105 15:04:40.834958 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.834918 kubelet[3315]: E1105 15:04:40.834984 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.837063 kubelet[3315]: E1105 15:04:40.837031 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.837428 kubelet[3315]: W1105 15:04:40.837198 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.837428 kubelet[3315]: E1105 15:04:40.837233 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:40.837693 kubelet[3315]: E1105 15:04:40.837671 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.837787 kubelet[3315]: W1105 15:04:40.837766 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.838072 kubelet[3315]: E1105 15:04:40.837873 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.838263 kubelet[3315]: E1105 15:04:40.838243 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.838363 kubelet[3315]: W1105 15:04:40.838343 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.838476 kubelet[3315]: E1105 15:04:40.838455 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.838944 kubelet[3315]: E1105 15:04:40.838818 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.838944 kubelet[3315]: W1105 15:04:40.838839 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.838944 kubelet[3315]: E1105 15:04:40.838859 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.841062 kubelet[3315]: E1105 15:04:40.841027 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.841551 kubelet[3315]: W1105 15:04:40.841226 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.841551 kubelet[3315]: E1105 15:04:40.841265 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.842201 kubelet[3315]: E1105 15:04:40.842173 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.842333 kubelet[3315]: W1105 15:04:40.842308 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.842465 kubelet[3315]: E1105 15:04:40.842418 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:40.843036 kubelet[3315]: E1105 15:04:40.843003 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.843313 kubelet[3315]: W1105 15:04:40.843250 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.843313 kubelet[3315]: E1105 15:04:40.843287 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.844015 kubelet[3315]: E1105 15:04:40.843934 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.844015 kubelet[3315]: W1105 15:04:40.843961 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.844015 kubelet[3315]: E1105 15:04:40.843986 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.844662 kubelet[3315]: E1105 15:04:40.844597 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.844662 kubelet[3315]: W1105 15:04:40.844619 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.844662 kubelet[3315]: E1105 15:04:40.844638 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.845374 kubelet[3315]: E1105 15:04:40.845310 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.845374 kubelet[3315]: W1105 15:04:40.845331 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.845374 kubelet[3315]: E1105 15:04:40.845352 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.846173 kubelet[3315]: E1105 15:04:40.846103 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.846173 kubelet[3315]: W1105 15:04:40.846126 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.846173 kubelet[3315]: E1105 15:04:40.846149 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:40.846935 kubelet[3315]: E1105 15:04:40.846818 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.846935 kubelet[3315]: W1105 15:04:40.846842 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.846935 kubelet[3315]: E1105 15:04:40.846864 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.849197 kubelet[3315]: E1105 15:04:40.849146 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.851469 kubelet[3315]: W1105 15:04:40.851293 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.851469 kubelet[3315]: E1105 15:04:40.851343 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.852833 kubelet[3315]: E1105 15:04:40.852732 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.852833 kubelet[3315]: W1105 15:04:40.852768 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.852833 kubelet[3315]: E1105 15:04:40.852800 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.853671 kubelet[3315]: E1105 15:04:40.853591 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.853671 kubelet[3315]: W1105 15:04:40.853619 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.853671 kubelet[3315]: E1105 15:04:40.853644 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.855921 kubelet[3315]: E1105 15:04:40.855763 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.855921 kubelet[3315]: W1105 15:04:40.855828 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.855921 kubelet[3315]: E1105 15:04:40.855860 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:40.856612 kubelet[3315]: E1105 15:04:40.856579 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.856840 kubelet[3315]: W1105 15:04:40.856760 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.856840 kubelet[3315]: E1105 15:04:40.856799 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.857982 kubelet[3315]: E1105 15:04:40.857436 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.857982 kubelet[3315]: W1105 15:04:40.857464 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.857982 kubelet[3315]: E1105 15:04:40.857491 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.858788 kubelet[3315]: E1105 15:04:40.858694 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.858788 kubelet[3315]: W1105 15:04:40.858727 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.858788 kubelet[3315]: E1105 15:04:40.858757 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.860177 kubelet[3315]: E1105 15:04:40.860144 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.860652 kubelet[3315]: W1105 15:04:40.860628 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.860921 kubelet[3315]: E1105 15:04:40.860745 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.861616 kubelet[3315]: E1105 15:04:40.861589 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.862047 kubelet[3315]: W1105 15:04:40.861936 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.862047 kubelet[3315]: E1105 15:04:40.861974 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 5 15:04:40.863767 kubelet[3315]: E1105 15:04:40.863733 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.864227 kubelet[3315]: W1105 15:04:40.864036 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.864227 kubelet[3315]: E1105 15:04:40.864075 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:40.866073 kubelet[3315]: E1105 15:04:40.865645 3315 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 5 15:04:40.866073 kubelet[3315]: W1105 15:04:40.865965 3315 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 5 15:04:40.866073 kubelet[3315]: E1105 15:04:40.865998 3315 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 5 15:04:41.428997 containerd[1997]: time="2025-11-05T15:04:41.428883295Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:04:41.431225 containerd[1997]: time="2025-11-05T15:04:41.430831495Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Nov 5 15:04:41.433297 containerd[1997]: time="2025-11-05T15:04:41.433235047Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:04:41.437812 containerd[1997]: time="2025-11-05T15:04:41.437762899Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:04:41.439005 containerd[1997]: time="2025-11-05T15:04:41.438943327Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.338654007s" Nov 5 15:04:41.439154 containerd[1997]: time="2025-11-05T15:04:41.439003867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Nov 5 15:04:41.448457 containerd[1997]: time="2025-11-05T15:04:41.448387795Z" level=info msg="CreateContainer within sandbox \"9d976a1a8f21c4a0741fb9b2bf53b4928d70cdf5700da4394582241f25384992\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 5 15:04:41.468789 containerd[1997]: time="2025-11-05T15:04:41.468733675Z" level=info msg="Container ca440dbe421bf9373d300fd34c1cbd3384434de5273c7d61d2b1484ad061bd13: CDI devices 
from CRI Config.CDIDevices: []" Nov 5 15:04:41.489425 containerd[1997]: time="2025-11-05T15:04:41.489326443Z" level=info msg="CreateContainer within sandbox \"9d976a1a8f21c4a0741fb9b2bf53b4928d70cdf5700da4394582241f25384992\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ca440dbe421bf9373d300fd34c1cbd3384434de5273c7d61d2b1484ad061bd13\"" Nov 5 15:04:41.491919 containerd[1997]: time="2025-11-05T15:04:41.491616235Z" level=info msg="StartContainer for \"ca440dbe421bf9373d300fd34c1cbd3384434de5273c7d61d2b1484ad061bd13\"" Nov 5 15:04:41.495524 containerd[1997]: time="2025-11-05T15:04:41.495468235Z" level=info msg="connecting to shim ca440dbe421bf9373d300fd34c1cbd3384434de5273c7d61d2b1484ad061bd13" address="unix:///run/containerd/s/4f1fdc9e4bfca8499f605fb36a1da096476a537ec658ddf01158ee6019eb23fe" protocol=ttrpc version=3 Nov 5 15:04:41.537238 systemd[1]: Started cri-containerd-ca440dbe421bf9373d300fd34c1cbd3384434de5273c7d61d2b1484ad061bd13.scope - libcontainer container ca440dbe421bf9373d300fd34c1cbd3384434de5273c7d61d2b1484ad061bd13. Nov 5 15:04:41.617738 containerd[1997]: time="2025-11-05T15:04:41.617676680Z" level=info msg="StartContainer for \"ca440dbe421bf9373d300fd34c1cbd3384434de5273c7d61d2b1484ad061bd13\" returns successfully" Nov 5 15:04:41.649166 systemd[1]: cri-containerd-ca440dbe421bf9373d300fd34c1cbd3384434de5273c7d61d2b1484ad061bd13.scope: Deactivated successfully. Nov 5 15:04:41.656137 containerd[1997]: time="2025-11-05T15:04:41.656076524Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca440dbe421bf9373d300fd34c1cbd3384434de5273c7d61d2b1484ad061bd13\" id:\"ca440dbe421bf9373d300fd34c1cbd3384434de5273c7d61d2b1484ad061bd13\" pid:4141 exited_at:{seconds:1762355081 nanos:653453276}" Nov 5 15:04:41.656318 containerd[1997]: time="2025-11-05T15:04:41.656161124Z" level=info msg="received exit event container_id:\"ca440dbe421bf9373d300fd34c1cbd3384434de5273c7d61d2b1484ad061bd13\" id:\"ca440dbe421bf9373d300fd34c1cbd3384434de5273c7d61d2b1484ad061bd13\" pid:4141 exited_at:{seconds:1762355081 nanos:653453276}" Nov 5 15:04:41.702082 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca440dbe421bf9373d300fd34c1cbd3384434de5273c7d61d2b1484ad061bd13-rootfs.mount: Deactivated successfully. 
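The long run of "Failed to unmarshal output for command: init" entries above comes from the kubelet probing its FlexVolume plugin directory before Calico's flexvol-driver init container (the ca440dbe... container started in the sandbox above) has installed the nodeagent~uds/uds binary. The kubelet executes the driver with the "init" argument and expects a small JSON status document on stdout; with the executable missing, the output is empty and JSON decoding fails with "unexpected end of JSON input". A minimal Go sketch of that decode step follows; the struct fields are an approximation of the FlexVolume status format, not kubelet source.

// Sketch (not kubelet code): approximate shape of the JSON a FlexVolume
// driver prints for "init", and the unmarshal step that fails in the log
// above when the driver binary is missing and its output is empty.
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus approximates what kubelet's driver-call.go decodes.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// What a healthy driver would print for "init".
	ok := `{"status":"Success","capabilities":{"attach":false}}`
	var st driverStatus
	if err := json.Unmarshal([]byte(ok), &st); err != nil {
		panic(err)
	}
	fmt.Printf("init ok: %+v\n", st)

	// What the kubelet actually got while the binary was absent: empty output.
	if err := json.Unmarshal([]byte(""), &st); err != nil {
		fmt.Println("empty output:", err) // prints "unexpected end of JSON input"
	}
}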
Nov 5 15:04:42.485618 kubelet[3315]: E1105 15:04:42.485526 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbscs" podUID="80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572" Nov 5 15:04:42.806303 containerd[1997]: time="2025-11-05T15:04:42.805239298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Nov 5 15:04:44.485526 kubelet[3315]: E1105 15:04:44.485462 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbscs" podUID="80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572" Nov 5 15:04:46.486957 kubelet[3315]: E1105 15:04:46.486500 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbscs" podUID="80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572" Nov 5 15:04:47.355931 containerd[1997]: time="2025-11-05T15:04:47.355157424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:04:47.357119 containerd[1997]: time="2025-11-05T15:04:47.357080208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Nov 5 15:04:47.359000 containerd[1997]: time="2025-11-05T15:04:47.358925712Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:04:47.362953 containerd[1997]: time="2025-11-05T15:04:47.362475804Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:04:47.363882 containerd[1997]: time="2025-11-05T15:04:47.363839220Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 4.558537462s" Nov 5 15:04:47.364082 containerd[1997]: time="2025-11-05T15:04:47.364039476Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Nov 5 15:04:47.370849 containerd[1997]: time="2025-11-05T15:04:47.370785972Z" level=info msg="CreateContainer within sandbox \"9d976a1a8f21c4a0741fb9b2bf53b4928d70cdf5700da4394582241f25384992\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 5 15:04:47.387229 containerd[1997]: time="2025-11-05T15:04:47.387171996Z" level=info msg="Container 7a605788b103451dd0f2bb4ba8dcf5573e258e7602dc7b97be85cbdf4d7562ff: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:04:47.406566 containerd[1997]: time="2025-11-05T15:04:47.406466245Z" level=info msg="CreateContainer within sandbox 
\"9d976a1a8f21c4a0741fb9b2bf53b4928d70cdf5700da4394582241f25384992\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7a605788b103451dd0f2bb4ba8dcf5573e258e7602dc7b97be85cbdf4d7562ff\"" Nov 5 15:04:47.408186 containerd[1997]: time="2025-11-05T15:04:47.408114757Z" level=info msg="StartContainer for \"7a605788b103451dd0f2bb4ba8dcf5573e258e7602dc7b97be85cbdf4d7562ff\"" Nov 5 15:04:47.414107 containerd[1997]: time="2025-11-05T15:04:47.414035869Z" level=info msg="connecting to shim 7a605788b103451dd0f2bb4ba8dcf5573e258e7602dc7b97be85cbdf4d7562ff" address="unix:///run/containerd/s/4f1fdc9e4bfca8499f605fb36a1da096476a537ec658ddf01158ee6019eb23fe" protocol=ttrpc version=3 Nov 5 15:04:47.466258 systemd[1]: Started cri-containerd-7a605788b103451dd0f2bb4ba8dcf5573e258e7602dc7b97be85cbdf4d7562ff.scope - libcontainer container 7a605788b103451dd0f2bb4ba8dcf5573e258e7602dc7b97be85cbdf4d7562ff. Nov 5 15:04:47.560440 containerd[1997]: time="2025-11-05T15:04:47.560319553Z" level=info msg="StartContainer for \"7a605788b103451dd0f2bb4ba8dcf5573e258e7602dc7b97be85cbdf4d7562ff\" returns successfully" Nov 5 15:04:48.487121 kubelet[3315]: E1105 15:04:48.486998 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbscs" podUID="80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572" Nov 5 15:04:48.574180 containerd[1997]: time="2025-11-05T15:04:48.574113350Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 15:04:48.578736 systemd[1]: cri-containerd-7a605788b103451dd0f2bb4ba8dcf5573e258e7602dc7b97be85cbdf4d7562ff.scope: Deactivated successfully. Nov 5 15:04:48.579842 systemd[1]: cri-containerd-7a605788b103451dd0f2bb4ba8dcf5573e258e7602dc7b97be85cbdf4d7562ff.scope: Consumed 923ms CPU time, 188.1M memory peak, 165.9M written to disk. Nov 5 15:04:48.584653 containerd[1997]: time="2025-11-05T15:04:48.584443202Z" level=info msg="received exit event container_id:\"7a605788b103451dd0f2bb4ba8dcf5573e258e7602dc7b97be85cbdf4d7562ff\" id:\"7a605788b103451dd0f2bb4ba8dcf5573e258e7602dc7b97be85cbdf4d7562ff\" pid:4197 exited_at:{seconds:1762355088 nanos:584119274}" Nov 5 15:04:48.585467 containerd[1997]: time="2025-11-05T15:04:48.585402890Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a605788b103451dd0f2bb4ba8dcf5573e258e7602dc7b97be85cbdf4d7562ff\" id:\"7a605788b103451dd0f2bb4ba8dcf5573e258e7602dc7b97be85cbdf4d7562ff\" pid:4197 exited_at:{seconds:1762355088 nanos:584119274}" Nov 5 15:04:48.628027 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a605788b103451dd0f2bb4ba8dcf5573e258e7602dc7b97be85cbdf4d7562ff-rootfs.mount: Deactivated successfully. 
Nov 5 15:04:48.670940 kubelet[3315]: I1105 15:04:48.670869 3315 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 5 15:04:48.808410 kubelet[3315]: I1105 15:04:48.808082 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d480047e-a8f5-4d50-b3b1-cda61de6f2e4-config-volume\") pod \"coredns-674b8bbfcf-nt62s\" (UID: \"d480047e-a8f5-4d50-b3b1-cda61de6f2e4\") " pod="kube-system/coredns-674b8bbfcf-nt62s" Nov 5 15:04:48.808410 kubelet[3315]: I1105 15:04:48.808180 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-db94p\" (UniqueName: \"kubernetes.io/projected/d480047e-a8f5-4d50-b3b1-cda61de6f2e4-kube-api-access-db94p\") pod \"coredns-674b8bbfcf-nt62s\" (UID: \"d480047e-a8f5-4d50-b3b1-cda61de6f2e4\") " pod="kube-system/coredns-674b8bbfcf-nt62s" Nov 5 15:04:48.815473 systemd[1]: Created slice kubepods-burstable-podd480047e_a8f5_4d50_b3b1_cda61de6f2e4.slice - libcontainer container kubepods-burstable-podd480047e_a8f5_4d50_b3b1_cda61de6f2e4.slice. Nov 5 15:04:48.908929 kubelet[3315]: I1105 15:04:48.908776 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvsxn\" (UniqueName: \"kubernetes.io/projected/9e79ac53-08d7-4495-96f5-177d69064854-kube-api-access-nvsxn\") pod \"coredns-674b8bbfcf-rvxhd\" (UID: \"9e79ac53-08d7-4495-96f5-177d69064854\") " pod="kube-system/coredns-674b8bbfcf-rvxhd" Nov 5 15:04:48.909473 kubelet[3315]: I1105 15:04:48.909411 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9e79ac53-08d7-4495-96f5-177d69064854-config-volume\") pod \"coredns-674b8bbfcf-rvxhd\" (UID: \"9e79ac53-08d7-4495-96f5-177d69064854\") " pod="kube-system/coredns-674b8bbfcf-rvxhd" Nov 5 15:04:48.972577 systemd[1]: Created slice kubepods-burstable-pod9e79ac53_08d7_4495_96f5_177d69064854.slice - libcontainer container kubepods-burstable-pod9e79ac53_08d7_4495_96f5_177d69064854.slice. Nov 5 15:04:49.010648 kubelet[3315]: I1105 15:04:49.010588 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7czzv\" (UniqueName: \"kubernetes.io/projected/8a16765c-7214-405b-a3ab-1a750d3fae14-kube-api-access-7czzv\") pod \"calico-apiserver-9456ddf4d-hxsgd\" (UID: \"8a16765c-7214-405b-a3ab-1a750d3fae14\") " pod="calico-apiserver/calico-apiserver-9456ddf4d-hxsgd" Nov 5 15:04:49.011244 kubelet[3315]: I1105 15:04:49.010701 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8a16765c-7214-405b-a3ab-1a750d3fae14-calico-apiserver-certs\") pod \"calico-apiserver-9456ddf4d-hxsgd\" (UID: \"8a16765c-7214-405b-a3ab-1a750d3fae14\") " pod="calico-apiserver/calico-apiserver-9456ddf4d-hxsgd" Nov 5 15:04:49.065212 systemd[1]: Created slice kubepods-besteffort-pod88b7c103_d45c_4fa8_81a5_56483036338a.slice - libcontainer container kubepods-besteffort-pod88b7c103_d45c_4fa8_81a5_56483036338a.slice. Nov 5 15:04:49.081945 systemd[1]: Created slice kubepods-besteffort-pod8a16765c_7214_405b_a3ab_1a750d3fae14.slice - libcontainer container kubepods-besteffort-pod8a16765c_7214_405b_a3ab_1a750d3fae14.slice. 
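The "Created slice kubepods-burstable-pod..." entries follow the naming pattern the kubelet's systemd cgroup driver uses for pod cgroups: the pod's QoS class plus its UID with dashes replaced by underscores. A small Go sketch reproducing the slice name seen above for the coredns-674b8bbfcf-nt62s pod; it illustrates the pattern visible in these entries, not the kubelet's exact code path.

// Sketch of the naming pattern in the "Created slice ..." entries:
// "kubepods-" + QoS class + "-pod" + UID with '-' -> '_' + ".slice".
package main

import (
	"fmt"
	"strings"
)

func podSliceName(qos, uid string) string {
	return "kubepods-" + qos + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
}

func main() {
	// UID taken from the coredns-674b8bbfcf-nt62s entries above.
	fmt.Println(podSliceName("burstable", "d480047e-a8f5-4d50-b3b1-cda61de6f2e4"))
	// -> kubepods-burstable-podd480047e_a8f5_4d50_b3b1_cda61de6f2e4.slice
}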
Nov 5 15:04:49.112108 kubelet[3315]: I1105 15:04:49.111870 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/bd295bb4-9ab3-4f09-8d18-d7e16c0d217c-calico-apiserver-certs\") pod \"calico-apiserver-9456ddf4d-qdk95\" (UID: \"bd295bb4-9ab3-4f09-8d18-d7e16c0d217c\") " pod="calico-apiserver/calico-apiserver-9456ddf4d-qdk95" Nov 5 15:04:49.113963 kubelet[3315]: I1105 15:04:49.113347 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88b7c103-d45c-4fa8-81a5-56483036338a-tigera-ca-bundle\") pod \"calico-kube-controllers-78f86b5b57-hhskm\" (UID: \"88b7c103-d45c-4fa8-81a5-56483036338a\") " pod="calico-system/calico-kube-controllers-78f86b5b57-hhskm" Nov 5 15:04:49.114287 kubelet[3315]: I1105 15:04:49.114254 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cr7wf\" (UniqueName: \"kubernetes.io/projected/bd295bb4-9ab3-4f09-8d18-d7e16c0d217c-kube-api-access-cr7wf\") pod \"calico-apiserver-9456ddf4d-qdk95\" (UID: \"bd295bb4-9ab3-4f09-8d18-d7e16c0d217c\") " pod="calico-apiserver/calico-apiserver-9456ddf4d-qdk95" Nov 5 15:04:49.114449 kubelet[3315]: I1105 15:04:49.114424 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dnpk\" (UniqueName: \"kubernetes.io/projected/88b7c103-d45c-4fa8-81a5-56483036338a-kube-api-access-6dnpk\") pod \"calico-kube-controllers-78f86b5b57-hhskm\" (UID: \"88b7c103-d45c-4fa8-81a5-56483036338a\") " pod="calico-system/calico-kube-controllers-78f86b5b57-hhskm" Nov 5 15:04:49.117176 systemd[1]: Created slice kubepods-besteffort-podbd295bb4_9ab3_4f09_8d18_d7e16c0d217c.slice - libcontainer container kubepods-besteffort-podbd295bb4_9ab3_4f09_8d18_d7e16c0d217c.slice. 
Nov 5 15:04:49.128825 containerd[1997]: time="2025-11-05T15:04:49.128775757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nt62s,Uid:d480047e-a8f5-4d50-b3b1-cda61de6f2e4,Namespace:kube-system,Attempt:0,}" Nov 5 15:04:49.223820 kubelet[3315]: I1105 15:04:49.219365 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7-goldmane-key-pair\") pod \"goldmane-666569f655-q7wlk\" (UID: \"ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7\") " pod="calico-system/goldmane-666569f655-q7wlk" Nov 5 15:04:49.223820 kubelet[3315]: I1105 15:04:49.221811 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnp2k\" (UniqueName: \"kubernetes.io/projected/ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7-kube-api-access-qnp2k\") pod \"goldmane-666569f655-q7wlk\" (UID: \"ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7\") " pod="calico-system/goldmane-666569f655-q7wlk" Nov 5 15:04:49.223820 kubelet[3315]: I1105 15:04:49.221923 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/8b23d5a1-7fb9-4412-bcea-afb711fedf9c-calico-apiserver-certs\") pod \"calico-apiserver-84dbb9fd44-dctgw\" (UID: \"8b23d5a1-7fb9-4412-bcea-afb711fedf9c\") " pod="calico-apiserver/calico-apiserver-84dbb9fd44-dctgw" Nov 5 15:04:49.223820 kubelet[3315]: I1105 15:04:49.221969 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49sgl\" (UniqueName: \"kubernetes.io/projected/8b23d5a1-7fb9-4412-bcea-afb711fedf9c-kube-api-access-49sgl\") pod \"calico-apiserver-84dbb9fd44-dctgw\" (UID: \"8b23d5a1-7fb9-4412-bcea-afb711fedf9c\") " pod="calico-apiserver/calico-apiserver-84dbb9fd44-dctgw" Nov 5 15:04:49.223820 kubelet[3315]: I1105 15:04:49.222070 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7-config\") pod \"goldmane-666569f655-q7wlk\" (UID: \"ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7\") " pod="calico-system/goldmane-666569f655-q7wlk" Nov 5 15:04:49.223481 systemd[1]: Created slice kubepods-besteffort-pod8b23d5a1_7fb9_4412_bcea_afb711fedf9c.slice - libcontainer container kubepods-besteffort-pod8b23d5a1_7fb9_4412_bcea_afb711fedf9c.slice. Nov 5 15:04:49.224336 kubelet[3315]: I1105 15:04:49.222142 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7-goldmane-ca-bundle\") pod \"goldmane-666569f655-q7wlk\" (UID: \"ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7\") " pod="calico-system/goldmane-666569f655-q7wlk" Nov 5 15:04:49.268649 systemd[1]: Created slice kubepods-besteffort-podceff6c23_cc8d_4d0d_a96c_00e2c04e9ec7.slice - libcontainer container kubepods-besteffort-podceff6c23_cc8d_4d0d_a96c_00e2c04e9ec7.slice. 
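In the entries above, each pod UID (for example d480047e-a8f5-4d50-b3b1-cda61de6f2e4) reappears in the systemd slice name with dashes turned into underscores and a QoS-class prefix (kubepods-burstable-pod…, kubepods-besteffort-pod…). A minimal Go sketch that reproduces the mapping visible in these log lines — illustrative only, not kubelet's actual cgroup-manager code:

```go
package main

import (
	"fmt"
	"strings"
)

// sliceName reproduces the naming pattern seen in the systemd entries above:
// the pod UID with "-" replaced by "_", prefixed by the pod's QoS class.
func sliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// coredns pod from the log above (burstable QoS)
	fmt.Println(sliceName("burstable", "d480047e-a8f5-4d50-b3b1-cda61de6f2e4"))
	// calico-apiserver pod from the log above (besteffort QoS)
	fmt.Println(sliceName("besteffort", "8a16765c-7214-405b-a3ab-1a750d3fae14"))
}
```

Running it prints kubepods-burstable-podd480047e_a8f5_4d50_b3b1_cda61de6f2e4.slice and kubepods-besteffort-pod8a16765c_7214_405b_a3ab_1a750d3fae14.slice, matching the slices created above.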
Nov 5 15:04:49.293668 containerd[1997]: time="2025-11-05T15:04:49.293616098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rvxhd,Uid:9e79ac53-08d7-4495-96f5-177d69064854,Namespace:kube-system,Attempt:0,}" Nov 5 15:04:49.309858 systemd[1]: Created slice kubepods-besteffort-podaa868d35_14a6_4a98_9b28_618dfad40231.slice - libcontainer container kubepods-besteffort-podaa868d35_14a6_4a98_9b28_618dfad40231.slice. Nov 5 15:04:49.322644 kubelet[3315]: I1105 15:04:49.322505 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aa868d35-14a6-4a98-9b28-618dfad40231-whisker-backend-key-pair\") pod \"whisker-6bd5764d8d-cfqmz\" (UID: \"aa868d35-14a6-4a98-9b28-618dfad40231\") " pod="calico-system/whisker-6bd5764d8d-cfqmz" Nov 5 15:04:49.324833 kubelet[3315]: I1105 15:04:49.324787 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa868d35-14a6-4a98-9b28-618dfad40231-whisker-ca-bundle\") pod \"whisker-6bd5764d8d-cfqmz\" (UID: \"aa868d35-14a6-4a98-9b28-618dfad40231\") " pod="calico-system/whisker-6bd5764d8d-cfqmz" Nov 5 15:04:49.331921 kubelet[3315]: I1105 15:04:49.331329 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jbgh\" (UniqueName: \"kubernetes.io/projected/aa868d35-14a6-4a98-9b28-618dfad40231-kube-api-access-5jbgh\") pod \"whisker-6bd5764d8d-cfqmz\" (UID: \"aa868d35-14a6-4a98-9b28-618dfad40231\") " pod="calico-system/whisker-6bd5764d8d-cfqmz" Nov 5 15:04:49.397479 containerd[1997]: time="2025-11-05T15:04:49.397426334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9456ddf4d-hxsgd,Uid:8a16765c-7214-405b-a3ab-1a750d3fae14,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:04:49.399675 containerd[1997]: time="2025-11-05T15:04:49.399566366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f86b5b57-hhskm,Uid:88b7c103-d45c-4fa8-81a5-56483036338a,Namespace:calico-system,Attempt:0,}" Nov 5 15:04:49.430779 containerd[1997]: time="2025-11-05T15:04:49.430522227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9456ddf4d-qdk95,Uid:bd295bb4-9ab3-4f09-8d18-d7e16c0d217c,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:04:49.565071 containerd[1997]: time="2025-11-05T15:04:49.564987879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84dbb9fd44-dctgw,Uid:8b23d5a1-7fb9-4412-bcea-afb711fedf9c,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:04:49.586244 containerd[1997]: time="2025-11-05T15:04:49.586084635Z" level=error msg="Failed to destroy network for sandbox \"e58c2f5f056b7c45a8c596fbab7ed1f042b5c1592cfe7945370402307c5788bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:49.614334 containerd[1997]: time="2025-11-05T15:04:49.614242852Z" level=error msg="Failed to destroy network for sandbox \"b142bbf1fad6801fb14bda525df90b1a71fd2c9979e09cd0bb171c81840ef2af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:49.616593 containerd[1997]: time="2025-11-05T15:04:49.616050448Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-q7wlk,Uid:ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7,Namespace:calico-system,Attempt:0,}" Nov 5 15:04:49.621461 containerd[1997]: time="2025-11-05T15:04:49.621375640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nt62s,Uid:d480047e-a8f5-4d50-b3b1-cda61de6f2e4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58c2f5f056b7c45a8c596fbab7ed1f042b5c1592cfe7945370402307c5788bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:49.626464 kubelet[3315]: E1105 15:04:49.625766 3315 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58c2f5f056b7c45a8c596fbab7ed1f042b5c1592cfe7945370402307c5788bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:49.626464 kubelet[3315]: E1105 15:04:49.625864 3315 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58c2f5f056b7c45a8c596fbab7ed1f042b5c1592cfe7945370402307c5788bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nt62s" Nov 5 15:04:49.626464 kubelet[3315]: E1105 15:04:49.625922 3315 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e58c2f5f056b7c45a8c596fbab7ed1f042b5c1592cfe7945370402307c5788bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-nt62s" Nov 5 15:04:49.628963 containerd[1997]: time="2025-11-05T15:04:49.625137232Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9456ddf4d-hxsgd,Uid:8a16765c-7214-405b-a3ab-1a750d3fae14,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b142bbf1fad6801fb14bda525df90b1a71fd2c9979e09cd0bb171c81840ef2af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:49.629124 kubelet[3315]: E1105 15:04:49.626016 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-nt62s_kube-system(d480047e-a8f5-4d50-b3b1-cda61de6f2e4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-nt62s_kube-system(d480047e-a8f5-4d50-b3b1-cda61de6f2e4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e58c2f5f056b7c45a8c596fbab7ed1f042b5c1592cfe7945370402307c5788bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-nt62s" podUID="d480047e-a8f5-4d50-b3b1-cda61de6f2e4" Nov 5 
15:04:49.630069 kubelet[3315]: E1105 15:04:49.629822 3315 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b142bbf1fad6801fb14bda525df90b1a71fd2c9979e09cd0bb171c81840ef2af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:49.630528 kubelet[3315]: E1105 15:04:49.629940 3315 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b142bbf1fad6801fb14bda525df90b1a71fd2c9979e09cd0bb171c81840ef2af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9456ddf4d-hxsgd" Nov 5 15:04:49.630528 kubelet[3315]: E1105 15:04:49.630450 3315 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b142bbf1fad6801fb14bda525df90b1a71fd2c9979e09cd0bb171c81840ef2af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9456ddf4d-hxsgd" Nov 5 15:04:49.633303 kubelet[3315]: E1105 15:04:49.633082 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9456ddf4d-hxsgd_calico-apiserver(8a16765c-7214-405b-a3ab-1a750d3fae14)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9456ddf4d-hxsgd_calico-apiserver(8a16765c-7214-405b-a3ab-1a750d3fae14)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b142bbf1fad6801fb14bda525df90b1a71fd2c9979e09cd0bb171c81840ef2af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9456ddf4d-hxsgd" podUID="8a16765c-7214-405b-a3ab-1a750d3fae14" Nov 5 15:04:49.633520 containerd[1997]: time="2025-11-05T15:04:49.632784244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bd5764d8d-cfqmz,Uid:aa868d35-14a6-4a98-9b28-618dfad40231,Namespace:calico-system,Attempt:0,}" Nov 5 15:04:49.725602 containerd[1997]: time="2025-11-05T15:04:49.725323216Z" level=error msg="Failed to destroy network for sandbox \"ab3b1518de1a4e8ba0d8a83d716ee9c82921c0b3f71c7a9247c764e4efb5b168\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:49.730676 systemd[1]: run-netns-cni\x2d868cebb0\x2de9d2\x2d049a\x2d65b4\x2d1606b6169592.mount: Deactivated successfully. 
Nov 5 15:04:49.752825 containerd[1997]: time="2025-11-05T15:04:49.752639872Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rvxhd,Uid:9e79ac53-08d7-4495-96f5-177d69064854,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab3b1518de1a4e8ba0d8a83d716ee9c82921c0b3f71c7a9247c764e4efb5b168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:49.754688 kubelet[3315]: E1105 15:04:49.754032 3315 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab3b1518de1a4e8ba0d8a83d716ee9c82921c0b3f71c7a9247c764e4efb5b168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:49.754688 kubelet[3315]: E1105 15:04:49.754131 3315 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab3b1518de1a4e8ba0d8a83d716ee9c82921c0b3f71c7a9247c764e4efb5b168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rvxhd" Nov 5 15:04:49.754688 kubelet[3315]: E1105 15:04:49.754171 3315 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ab3b1518de1a4e8ba0d8a83d716ee9c82921c0b3f71c7a9247c764e4efb5b168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-rvxhd" Nov 5 15:04:49.755107 kubelet[3315]: E1105 15:04:49.754256 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-rvxhd_kube-system(9e79ac53-08d7-4495-96f5-177d69064854)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-rvxhd_kube-system(9e79ac53-08d7-4495-96f5-177d69064854)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ab3b1518de1a4e8ba0d8a83d716ee9c82921c0b3f71c7a9247c764e4efb5b168\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-rvxhd" podUID="9e79ac53-08d7-4495-96f5-177d69064854" Nov 5 15:04:49.803299 containerd[1997]: time="2025-11-05T15:04:49.803184112Z" level=error msg="Failed to destroy network for sandbox \"77c57123042b4df985c86a019b12bc00ac3c611981ff51c05750b3b8a8fd1ea0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:49.808689 systemd[1]: run-netns-cni\x2d2d80793d\x2d6857\x2dc1be\x2ddea0\x2d376d8a89f746.mount: Deactivated successfully. 
Nov 5 15:04:49.810995 containerd[1997]: time="2025-11-05T15:04:49.810913049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9456ddf4d-qdk95,Uid:bd295bb4-9ab3-4f09-8d18-d7e16c0d217c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"77c57123042b4df985c86a019b12bc00ac3c611981ff51c05750b3b8a8fd1ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:49.813595 kubelet[3315]: E1105 15:04:49.812193 3315 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77c57123042b4df985c86a019b12bc00ac3c611981ff51c05750b3b8a8fd1ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:49.813595 kubelet[3315]: E1105 15:04:49.812281 3315 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77c57123042b4df985c86a019b12bc00ac3c611981ff51c05750b3b8a8fd1ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9456ddf4d-qdk95" Nov 5 15:04:49.813595 kubelet[3315]: E1105 15:04:49.812318 3315 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77c57123042b4df985c86a019b12bc00ac3c611981ff51c05750b3b8a8fd1ea0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-9456ddf4d-qdk95" Nov 5 15:04:49.814554 kubelet[3315]: E1105 15:04:49.812404 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-9456ddf4d-qdk95_calico-apiserver(bd295bb4-9ab3-4f09-8d18-d7e16c0d217c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-9456ddf4d-qdk95_calico-apiserver(bd295bb4-9ab3-4f09-8d18-d7e16c0d217c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77c57123042b4df985c86a019b12bc00ac3c611981ff51c05750b3b8a8fd1ea0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-9456ddf4d-qdk95" podUID="bd295bb4-9ab3-4f09-8d18-d7e16c0d217c" Nov 5 15:04:49.866271 containerd[1997]: time="2025-11-05T15:04:49.866108957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Nov 5 15:04:49.904129 containerd[1997]: time="2025-11-05T15:04:49.904052549Z" level=error msg="Failed to destroy network for sandbox \"087886b60d0d033559cf4d664e3a1de6a34a64f7ac5b8448f8ee56eadc11c528\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:49.910660 systemd[1]: run-netns-cni\x2d828f3afa\x2dd38a\x2d48f1\x2d8df6\x2d03ac0f673ce5.mount: Deactivated successfully. 
Nov 5 15:04:49.912972 containerd[1997]: time="2025-11-05T15:04:49.910858985Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f86b5b57-hhskm,Uid:88b7c103-d45c-4fa8-81a5-56483036338a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"087886b60d0d033559cf4d664e3a1de6a34a64f7ac5b8448f8ee56eadc11c528\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:49.914160 kubelet[3315]: E1105 15:04:49.914109 3315 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"087886b60d0d033559cf4d664e3a1de6a34a64f7ac5b8448f8ee56eadc11c528\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:49.914702 kubelet[3315]: E1105 15:04:49.914662 3315 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"087886b60d0d033559cf4d664e3a1de6a34a64f7ac5b8448f8ee56eadc11c528\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78f86b5b57-hhskm" Nov 5 15:04:49.914882 kubelet[3315]: E1105 15:04:49.914847 3315 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"087886b60d0d033559cf4d664e3a1de6a34a64f7ac5b8448f8ee56eadc11c528\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78f86b5b57-hhskm" Nov 5 15:04:49.915979 kubelet[3315]: E1105 15:04:49.915874 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78f86b5b57-hhskm_calico-system(88b7c103-d45c-4fa8-81a5-56483036338a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78f86b5b57-hhskm_calico-system(88b7c103-d45c-4fa8-81a5-56483036338a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"087886b60d0d033559cf4d664e3a1de6a34a64f7ac5b8448f8ee56eadc11c528\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78f86b5b57-hhskm" podUID="88b7c103-d45c-4fa8-81a5-56483036338a" Nov 5 15:04:49.976180 containerd[1997]: time="2025-11-05T15:04:49.976104053Z" level=error msg="Failed to destroy network for sandbox \"146e1e748cc69e6d8f744640ee0eccdb337042208b313e3cbfa2e6ce93322687\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:49.979217 containerd[1997]: time="2025-11-05T15:04:49.979137497Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-q7wlk,Uid:ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7,Namespace:calico-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"146e1e748cc69e6d8f744640ee0eccdb337042208b313e3cbfa2e6ce93322687\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:49.979931 kubelet[3315]: E1105 15:04:49.979810 3315 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"146e1e748cc69e6d8f744640ee0eccdb337042208b313e3cbfa2e6ce93322687\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:49.980955 kubelet[3315]: E1105 15:04:49.980112 3315 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"146e1e748cc69e6d8f744640ee0eccdb337042208b313e3cbfa2e6ce93322687\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-q7wlk" Nov 5 15:04:49.980955 kubelet[3315]: E1105 15:04:49.980152 3315 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"146e1e748cc69e6d8f744640ee0eccdb337042208b313e3cbfa2e6ce93322687\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-q7wlk" Nov 5 15:04:49.980955 kubelet[3315]: E1105 15:04:49.980243 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-q7wlk_calico-system(ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-q7wlk_calico-system(ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"146e1e748cc69e6d8f744640ee0eccdb337042208b313e3cbfa2e6ce93322687\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-q7wlk" podUID="ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7" Nov 5 15:04:50.003356 containerd[1997]: time="2025-11-05T15:04:50.003237481Z" level=error msg="Failed to destroy network for sandbox \"1f84d646aa4c1fe38bbb5d7b01489c4b6f19ccb6bef0d1e62baab0a02cb73ff8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:50.005836 containerd[1997]: time="2025-11-05T15:04:50.005262553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bd5764d8d-cfqmz,Uid:aa868d35-14a6-4a98-9b28-618dfad40231,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f84d646aa4c1fe38bbb5d7b01489c4b6f19ccb6bef0d1e62baab0a02cb73ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:50.007866 
kubelet[3315]: E1105 15:04:50.007389 3315 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f84d646aa4c1fe38bbb5d7b01489c4b6f19ccb6bef0d1e62baab0a02cb73ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:50.007866 kubelet[3315]: E1105 15:04:50.007480 3315 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f84d646aa4c1fe38bbb5d7b01489c4b6f19ccb6bef0d1e62baab0a02cb73ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bd5764d8d-cfqmz" Nov 5 15:04:50.007866 kubelet[3315]: E1105 15:04:50.007517 3315 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f84d646aa4c1fe38bbb5d7b01489c4b6f19ccb6bef0d1e62baab0a02cb73ff8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bd5764d8d-cfqmz" Nov 5 15:04:50.008812 kubelet[3315]: E1105 15:04:50.007601 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6bd5764d8d-cfqmz_calico-system(aa868d35-14a6-4a98-9b28-618dfad40231)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6bd5764d8d-cfqmz_calico-system(aa868d35-14a6-4a98-9b28-618dfad40231)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f84d646aa4c1fe38bbb5d7b01489c4b6f19ccb6bef0d1e62baab0a02cb73ff8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6bd5764d8d-cfqmz" podUID="aa868d35-14a6-4a98-9b28-618dfad40231" Nov 5 15:04:50.018543 containerd[1997]: time="2025-11-05T15:04:50.018289646Z" level=error msg="Failed to destroy network for sandbox \"ec305cf05c8834b52100deedcfafbd58e23b98bb3e437e20d23291fbdf52e053\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:50.021209 containerd[1997]: time="2025-11-05T15:04:50.021135842Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84dbb9fd44-dctgw,Uid:8b23d5a1-7fb9-4412-bcea-afb711fedf9c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec305cf05c8834b52100deedcfafbd58e23b98bb3e437e20d23291fbdf52e053\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:50.023205 kubelet[3315]: E1105 15:04:50.023160 3315 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec305cf05c8834b52100deedcfafbd58e23b98bb3e437e20d23291fbdf52e053\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:50.023427 kubelet[3315]: E1105 15:04:50.023393 3315 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec305cf05c8834b52100deedcfafbd58e23b98bb3e437e20d23291fbdf52e053\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84dbb9fd44-dctgw" Nov 5 15:04:50.023542 kubelet[3315]: E1105 15:04:50.023513 3315 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ec305cf05c8834b52100deedcfafbd58e23b98bb3e437e20d23291fbdf52e053\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-84dbb9fd44-dctgw" Nov 5 15:04:50.023730 kubelet[3315]: E1105 15:04:50.023689 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-84dbb9fd44-dctgw_calico-apiserver(8b23d5a1-7fb9-4412-bcea-afb711fedf9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-84dbb9fd44-dctgw_calico-apiserver(8b23d5a1-7fb9-4412-bcea-afb711fedf9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ec305cf05c8834b52100deedcfafbd58e23b98bb3e437e20d23291fbdf52e053\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-84dbb9fd44-dctgw" podUID="8b23d5a1-7fb9-4412-bcea-afb711fedf9c" Nov 5 15:04:50.499294 systemd[1]: Created slice kubepods-besteffort-pod80c765f3_c6de_4dd4_a2b4_f4fc2fe8a572.slice - libcontainer container kubepods-besteffort-pod80c765f3_c6de_4dd4_a2b4_f4fc2fe8a572.slice. 
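Every RunPodSandbox attempt in this window (coredns, both calico-apiserver deployments, calico-kube-controllers, goldmane, whisker) fails with the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico/node only writes once it is running with /var/lib/calico mounted. The sketch below is not Calico's plugin code; it only mirrors the readiness check named in the error text, so the failure mode is easy to recognize:

```go
package main

import (
	"fmt"
	"os"
)

// nodenameReady mirrors the condition reported by the sandbox failures above:
// CNI setup cannot proceed until calico/node has written /var/lib/calico/nodename.
func nodenameReady(path string) (string, error) {
	b, err := os.ReadFile(path)
	if err != nil {
		// Corresponds to "stat /var/lib/calico/nodename: no such file or directory"
		// in the errors logged above.
		return "", fmt.Errorf("calico/node not ready: %w", err)
	}
	return string(b), nil
}

func main() {
	if name, err := nodenameReady("/var/lib/calico/nodename"); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("node name:", name)
	}
}
```

Once calico-node starts (below, after the image pull completes), the same sandboxes are retried and succeed.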
Nov 5 15:04:50.504580 containerd[1997]: time="2025-11-05T15:04:50.504500944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbscs,Uid:80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572,Namespace:calico-system,Attempt:0,}" Nov 5 15:04:50.595656 containerd[1997]: time="2025-11-05T15:04:50.595571140Z" level=error msg="Failed to destroy network for sandbox \"75524125072cd821d54823028a0ddf3bb7d939e7dac3ba2fa3dbedee531e394b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:50.597248 containerd[1997]: time="2025-11-05T15:04:50.597144160Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbscs,Uid:80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"75524125072cd821d54823028a0ddf3bb7d939e7dac3ba2fa3dbedee531e394b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:50.597545 kubelet[3315]: E1105 15:04:50.597494 3315 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75524125072cd821d54823028a0ddf3bb7d939e7dac3ba2fa3dbedee531e394b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 5 15:04:50.597683 kubelet[3315]: E1105 15:04:50.597575 3315 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75524125072cd821d54823028a0ddf3bb7d939e7dac3ba2fa3dbedee531e394b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbscs" Nov 5 15:04:50.597683 kubelet[3315]: E1105 15:04:50.597611 3315 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"75524125072cd821d54823028a0ddf3bb7d939e7dac3ba2fa3dbedee531e394b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbscs" Nov 5 15:04:50.597864 kubelet[3315]: E1105 15:04:50.597692 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dbscs_calico-system(80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dbscs_calico-system(80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"75524125072cd821d54823028a0ddf3bb7d939e7dac3ba2fa3dbedee531e394b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbscs" podUID="80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572" Nov 5 15:04:50.625421 systemd[1]: run-netns-cni\x2dccf86913\x2d212a\x2d6994\x2d92c9\x2d8862ebc6af8b.mount: Deactivated successfully. 
Nov 5 15:04:50.625630 systemd[1]: run-netns-cni\x2d8df287be\x2da463\x2d4f89\x2d8ed6\x2d05b571728210.mount: Deactivated successfully. Nov 5 15:04:50.625760 systemd[1]: run-netns-cni\x2de92e514e\x2de839\x2dcd0d\x2d6a69\x2d2240eafa63b7.mount: Deactivated successfully. Nov 5 15:04:56.722943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1758201303.mount: Deactivated successfully. Nov 5 15:04:56.781192 containerd[1997]: time="2025-11-05T15:04:56.781127087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:04:56.783188 containerd[1997]: time="2025-11-05T15:04:56.783135023Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Nov 5 15:04:56.783997 containerd[1997]: time="2025-11-05T15:04:56.783947567Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:04:56.787719 containerd[1997]: time="2025-11-05T15:04:56.787641479Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 15:04:56.790160 containerd[1997]: time="2025-11-05T15:04:56.790091363Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.923903746s" Nov 5 15:04:56.790160 containerd[1997]: time="2025-11-05T15:04:56.790149851Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Nov 5 15:04:56.818701 containerd[1997]: time="2025-11-05T15:04:56.818640227Z" level=info msg="CreateContainer within sandbox \"9d976a1a8f21c4a0741fb9b2bf53b4928d70cdf5700da4394582241f25384992\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 5 15:04:56.836608 containerd[1997]: time="2025-11-05T15:04:56.836553155Z" level=info msg="Container 016bd2fa2504481ef5d4bdbf0d8cda8fe521339c82de5cc275275617a29ebf53: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:04:56.846914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3044708482.mount: Deactivated successfully. 
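The "Pulled image … in 6.923903746s" line is consistent with the interval between the PullImage request logged at 15:04:49.866 and this completion event. A quick check of that arithmetic, using the two containerd timestamps copied from the log (the reported figure is measured just before the completion line is written, so it comes out a few tens of microseconds shorter):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the containerd entries above: the PullImage request
	// for ghcr.io/flatcar/calico/node:v3.30.4 and the "Pulled image" completion.
	start, _ := time.Parse(time.RFC3339Nano, "2025-11-05T15:04:49.866108957Z")
	done, _ := time.Parse(time.RFC3339Nano, "2025-11-05T15:04:56.790091363Z")

	// Prints roughly 6.923982406s, in line with the logged pull time of 6.923903746s.
	fmt.Println(done.Sub(start))
}
```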
Nov 5 15:04:56.865909 containerd[1997]: time="2025-11-05T15:04:56.865265616Z" level=info msg="CreateContainer within sandbox \"9d976a1a8f21c4a0741fb9b2bf53b4928d70cdf5700da4394582241f25384992\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"016bd2fa2504481ef5d4bdbf0d8cda8fe521339c82de5cc275275617a29ebf53\"" Nov 5 15:04:56.868540 containerd[1997]: time="2025-11-05T15:04:56.868157988Z" level=info msg="StartContainer for \"016bd2fa2504481ef5d4bdbf0d8cda8fe521339c82de5cc275275617a29ebf53\"" Nov 5 15:04:56.875633 containerd[1997]: time="2025-11-05T15:04:56.875574900Z" level=info msg="connecting to shim 016bd2fa2504481ef5d4bdbf0d8cda8fe521339c82de5cc275275617a29ebf53" address="unix:///run/containerd/s/4f1fdc9e4bfca8499f605fb36a1da096476a537ec658ddf01158ee6019eb23fe" protocol=ttrpc version=3 Nov 5 15:04:56.921598 systemd[1]: Started cri-containerd-016bd2fa2504481ef5d4bdbf0d8cda8fe521339c82de5cc275275617a29ebf53.scope - libcontainer container 016bd2fa2504481ef5d4bdbf0d8cda8fe521339c82de5cc275275617a29ebf53. Nov 5 15:04:57.014385 containerd[1997]: time="2025-11-05T15:04:57.013320764Z" level=info msg="StartContainer for \"016bd2fa2504481ef5d4bdbf0d8cda8fe521339c82de5cc275275617a29ebf53\" returns successfully" Nov 5 15:04:57.381364 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 5 15:04:57.381534 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Nov 5 15:04:57.710722 kubelet[3315]: I1105 15:04:57.710662 3315 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aa868d35-14a6-4a98-9b28-618dfad40231-whisker-backend-key-pair\") pod \"aa868d35-14a6-4a98-9b28-618dfad40231\" (UID: \"aa868d35-14a6-4a98-9b28-618dfad40231\") " Nov 5 15:04:57.712997 kubelet[3315]: I1105 15:04:57.711739 3315 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa868d35-14a6-4a98-9b28-618dfad40231-whisker-ca-bundle\") pod \"aa868d35-14a6-4a98-9b28-618dfad40231\" (UID: \"aa868d35-14a6-4a98-9b28-618dfad40231\") " Nov 5 15:04:57.712997 kubelet[3315]: I1105 15:04:57.711805 3315 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jbgh\" (UniqueName: \"kubernetes.io/projected/aa868d35-14a6-4a98-9b28-618dfad40231-kube-api-access-5jbgh\") pod \"aa868d35-14a6-4a98-9b28-618dfad40231\" (UID: \"aa868d35-14a6-4a98-9b28-618dfad40231\") " Nov 5 15:04:57.714110 kubelet[3315]: I1105 15:04:57.714039 3315 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa868d35-14a6-4a98-9b28-618dfad40231-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "aa868d35-14a6-4a98-9b28-618dfad40231" (UID: "aa868d35-14a6-4a98-9b28-618dfad40231"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 15:04:57.729982 kubelet[3315]: I1105 15:04:57.727218 3315 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa868d35-14a6-4a98-9b28-618dfad40231-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "aa868d35-14a6-4a98-9b28-618dfad40231" (UID: "aa868d35-14a6-4a98-9b28-618dfad40231"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 15:04:57.729982 kubelet[3315]: I1105 15:04:57.727455 3315 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa868d35-14a6-4a98-9b28-618dfad40231-kube-api-access-5jbgh" (OuterVolumeSpecName: "kube-api-access-5jbgh") pod "aa868d35-14a6-4a98-9b28-618dfad40231" (UID: "aa868d35-14a6-4a98-9b28-618dfad40231"). InnerVolumeSpecName "kube-api-access-5jbgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 15:04:57.731186 systemd[1]: var-lib-kubelet-pods-aa868d35\x2d14a6\x2d4a98\x2d9b28\x2d618dfad40231-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5jbgh.mount: Deactivated successfully. Nov 5 15:04:57.731370 systemd[1]: var-lib-kubelet-pods-aa868d35\x2d14a6\x2d4a98\x2d9b28\x2d618dfad40231-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Nov 5 15:04:57.813225 kubelet[3315]: I1105 15:04:57.813164 3315 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/aa868d35-14a6-4a98-9b28-618dfad40231-whisker-backend-key-pair\") on node \"ip-172-31-23-78\" DevicePath \"\"" Nov 5 15:04:57.813225 kubelet[3315]: I1105 15:04:57.813220 3315 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa868d35-14a6-4a98-9b28-618dfad40231-whisker-ca-bundle\") on node \"ip-172-31-23-78\" DevicePath \"\"" Nov 5 15:04:57.813441 kubelet[3315]: I1105 15:04:57.813248 3315 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5jbgh\" (UniqueName: \"kubernetes.io/projected/aa868d35-14a6-4a98-9b28-618dfad40231-kube-api-access-5jbgh\") on node \"ip-172-31-23-78\" DevicePath \"\"" Nov 5 15:04:57.959793 systemd[1]: Removed slice kubepods-besteffort-podaa868d35_14a6_4a98_9b28_618dfad40231.slice - libcontainer container kubepods-besteffort-podaa868d35_14a6_4a98_9b28_618dfad40231.slice. Nov 5 15:04:58.005467 kubelet[3315]: I1105 15:04:58.003743 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-thb6w" podStartSLOduration=3.9221410150000002 podStartE2EDuration="22.003712629s" podCreationTimestamp="2025-11-05 15:04:36 +0000 UTC" firstStartedPulling="2025-11-05 15:04:38.709728725 +0000 UTC m=+39.522248633" lastFinishedPulling="2025-11-05 15:04:56.791300339 +0000 UTC m=+57.603820247" observedRunningTime="2025-11-05 15:04:57.972834205 +0000 UTC m=+58.785354137" watchObservedRunningTime="2025-11-05 15:04:58.003712629 +0000 UTC m=+58.816232537" Nov 5 15:04:58.119781 systemd[1]: Created slice kubepods-besteffort-pod7c3e0183_e5b9_4364_be32_8caba037f1e7.slice - libcontainer container kubepods-besteffort-pod7c3e0183_e5b9_4364_be32_8caba037f1e7.slice. 
Nov 5 15:04:58.217496 kubelet[3315]: I1105 15:04:58.217401 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7c3e0183-e5b9-4364-be32-8caba037f1e7-whisker-ca-bundle\") pod \"whisker-79d458847d-vcdwj\" (UID: \"7c3e0183-e5b9-4364-be32-8caba037f1e7\") " pod="calico-system/whisker-79d458847d-vcdwj" Nov 5 15:04:58.217496 kubelet[3315]: I1105 15:04:58.217487 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/7c3e0183-e5b9-4364-be32-8caba037f1e7-whisker-backend-key-pair\") pod \"whisker-79d458847d-vcdwj\" (UID: \"7c3e0183-e5b9-4364-be32-8caba037f1e7\") " pod="calico-system/whisker-79d458847d-vcdwj" Nov 5 15:04:58.217727 kubelet[3315]: I1105 15:04:58.217551 3315 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c8rkz\" (UniqueName: \"kubernetes.io/projected/7c3e0183-e5b9-4364-be32-8caba037f1e7-kube-api-access-c8rkz\") pod \"whisker-79d458847d-vcdwj\" (UID: \"7c3e0183-e5b9-4364-be32-8caba037f1e7\") " pod="calico-system/whisker-79d458847d-vcdwj" Nov 5 15:04:58.428146 containerd[1997]: time="2025-11-05T15:04:58.428088059Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79d458847d-vcdwj,Uid:7c3e0183-e5b9-4364-be32-8caba037f1e7,Namespace:calico-system,Attempt:0,}" Nov 5 15:04:58.605613 containerd[1997]: time="2025-11-05T15:04:58.605535084Z" level=info msg="TaskExit event in podsandbox handler container_id:\"016bd2fa2504481ef5d4bdbf0d8cda8fe521339c82de5cc275275617a29ebf53\" id:\"d0fbbb14a8226c9b3aa8d55230272f54b9264ce17af6274bcba69fa8b15ff5ec\" pid:4544 exit_status:1 exited_at:{seconds:1762355098 nanos:603741144}" Nov 5 15:04:59.076416 containerd[1997]: time="2025-11-05T15:04:59.076210235Z" level=info msg="TaskExit event in podsandbox handler container_id:\"016bd2fa2504481ef5d4bdbf0d8cda8fe521339c82de5cc275275617a29ebf53\" id:\"7cc4ce73ffb8d9a99858a2f857b89e8afd16f101f419282d1f9b51249a4867c5\" pid:4590 exit_status:1 exited_at:{seconds:1762355099 nanos:75506999}" Nov 5 15:04:59.492800 kubelet[3315]: I1105 15:04:59.492733 3315 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa868d35-14a6-4a98-9b28-618dfad40231" path="/var/lib/kubelet/pods/aa868d35-14a6-4a98-9b28-618dfad40231/volumes" Nov 5 15:04:59.783858 (udev-worker)[4517]: Network interface NamePolicy= disabled on kernel command line. 
Nov 5 15:04:59.787469 systemd-networkd[1575]: cali14029c146ee: Link UP Nov 5 15:04:59.790998 systemd-networkd[1575]: cali14029c146ee: Gained carrier Nov 5 15:04:59.981942 containerd[1997]: 2025-11-05 15:04:58.589 [INFO][4562] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Nov 5 15:04:59.981942 containerd[1997]: 2025-11-05 15:04:59.481 [INFO][4562] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--78-k8s-whisker--79d458847d--vcdwj-eth0 whisker-79d458847d- calico-system 7c3e0183-e5b9-4364-be32-8caba037f1e7 984 0 2025-11-05 15:04:58 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:79d458847d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-23-78 whisker-79d458847d-vcdwj eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali14029c146ee [] [] }} ContainerID="754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722" Namespace="calico-system" Pod="whisker-79d458847d-vcdwj" WorkloadEndpoint="ip--172--31--23--78-k8s-whisker--79d458847d--vcdwj-" Nov 5 15:04:59.981942 containerd[1997]: 2025-11-05 15:04:59.481 [INFO][4562] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722" Namespace="calico-system" Pod="whisker-79d458847d-vcdwj" WorkloadEndpoint="ip--172--31--23--78-k8s-whisker--79d458847d--vcdwj-eth0" Nov 5 15:04:59.981942 containerd[1997]: 2025-11-05 15:04:59.622 [INFO][4605] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722" HandleID="k8s-pod-network.754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722" Workload="ip--172--31--23--78-k8s-whisker--79d458847d--vcdwj-eth0" Nov 5 15:04:59.982703 containerd[1997]: 2025-11-05 15:04:59.622 [INFO][4605] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722" HandleID="k8s-pod-network.754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722" Workload="ip--172--31--23--78-k8s-whisker--79d458847d--vcdwj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001037a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-78", "pod":"whisker-79d458847d-vcdwj", "timestamp":"2025-11-05 15:04:59.622231849 +0000 UTC"}, Hostname:"ip-172-31-23-78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:04:59.982703 containerd[1997]: 2025-11-05 15:04:59.622 [INFO][4605] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:04:59.982703 containerd[1997]: 2025-11-05 15:04:59.623 [INFO][4605] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:04:59.982703 containerd[1997]: 2025-11-05 15:04:59.623 [INFO][4605] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-78' Nov 5 15:04:59.982703 containerd[1997]: 2025-11-05 15:04:59.643 [INFO][4605] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722" host="ip-172-31-23-78" Nov 5 15:04:59.982703 containerd[1997]: 2025-11-05 15:04:59.659 [INFO][4605] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-78" Nov 5 15:04:59.982703 containerd[1997]: 2025-11-05 15:04:59.675 [INFO][4605] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:04:59.982703 containerd[1997]: 2025-11-05 15:04:59.681 [INFO][4605] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:04:59.982703 containerd[1997]: 2025-11-05 15:04:59.687 [INFO][4605] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:04:59.986050 containerd[1997]: 2025-11-05 15:04:59.687 [INFO][4605] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722" host="ip-172-31-23-78" Nov 5 15:04:59.986050 containerd[1997]: 2025-11-05 15:04:59.691 [INFO][4605] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722 Nov 5 15:04:59.986050 containerd[1997]: 2025-11-05 15:04:59.700 [INFO][4605] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722" host="ip-172-31-23-78" Nov 5 15:04:59.986050 containerd[1997]: 2025-11-05 15:04:59.710 [INFO][4605] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.129/26] block=192.168.19.128/26 handle="k8s-pod-network.754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722" host="ip-172-31-23-78" Nov 5 15:04:59.986050 containerd[1997]: 2025-11-05 15:04:59.710 [INFO][4605] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.129/26] handle="k8s-pod-network.754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722" host="ip-172-31-23-78" Nov 5 15:04:59.986050 containerd[1997]: 2025-11-05 15:04:59.710 [INFO][4605] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
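The IPAM sequence above shows the plugin taking the host-wide IPAM lock, confirming this node's affinity for block 192.168.19.128/26, and claiming 192.168.19.129 from it (later attached to the endpoint as a /32). A /26 block spans 64 addresses, and 192.168.19.129 is the second address in that range. A minimal standard-library check of the logged CIDRs — plain address arithmetic, not Calico's IPAM implementation:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Block and address copied from the IPAM log lines above.
	_, block, _ := net.ParseCIDR("192.168.19.128/26")
	ip := net.ParseIP("192.168.19.129")

	ones, bits := block.Mask.Size()
	fmt.Printf("block %s holds %d addresses\n", block, 1<<(bits-ones)) // 64
	fmt.Printf("%s in block: %v\n", ip, block.Contains(ip))            // true
}
```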
Nov 5 15:04:59.986050 containerd[1997]: 2025-11-05 15:04:59.710 [INFO][4605] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.129/26] IPv6=[] ContainerID="754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722" HandleID="k8s-pod-network.754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722" Workload="ip--172--31--23--78-k8s-whisker--79d458847d--vcdwj-eth0" Nov 5 15:04:59.986379 containerd[1997]: 2025-11-05 15:04:59.742 [INFO][4562] cni-plugin/k8s.go 418: Populated endpoint ContainerID="754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722" Namespace="calico-system" Pod="whisker-79d458847d-vcdwj" WorkloadEndpoint="ip--172--31--23--78-k8s-whisker--79d458847d--vcdwj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--78-k8s-whisker--79d458847d--vcdwj-eth0", GenerateName:"whisker-79d458847d-", Namespace:"calico-system", SelfLink:"", UID:"7c3e0183-e5b9-4364-be32-8caba037f1e7", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 4, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79d458847d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-78", ContainerID:"", Pod:"whisker-79d458847d-vcdwj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.19.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali14029c146ee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:04:59.986379 containerd[1997]: 2025-11-05 15:04:59.742 [INFO][4562] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.129/32] ContainerID="754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722" Namespace="calico-system" Pod="whisker-79d458847d-vcdwj" WorkloadEndpoint="ip--172--31--23--78-k8s-whisker--79d458847d--vcdwj-eth0" Nov 5 15:04:59.986681 containerd[1997]: 2025-11-05 15:04:59.742 [INFO][4562] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali14029c146ee ContainerID="754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722" Namespace="calico-system" Pod="whisker-79d458847d-vcdwj" WorkloadEndpoint="ip--172--31--23--78-k8s-whisker--79d458847d--vcdwj-eth0" Nov 5 15:04:59.986681 containerd[1997]: 2025-11-05 15:04:59.872 [INFO][4562] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722" Namespace="calico-system" Pod="whisker-79d458847d-vcdwj" WorkloadEndpoint="ip--172--31--23--78-k8s-whisker--79d458847d--vcdwj-eth0" Nov 5 15:04:59.986867 containerd[1997]: 2025-11-05 15:04:59.873 [INFO][4562] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722" Namespace="calico-system" Pod="whisker-79d458847d-vcdwj" 
WorkloadEndpoint="ip--172--31--23--78-k8s-whisker--79d458847d--vcdwj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--78-k8s-whisker--79d458847d--vcdwj-eth0", GenerateName:"whisker-79d458847d-", Namespace:"calico-system", SelfLink:"", UID:"7c3e0183-e5b9-4364-be32-8caba037f1e7", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 4, 58, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"79d458847d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-78", ContainerID:"754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722", Pod:"whisker-79d458847d-vcdwj", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.19.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali14029c146ee", MAC:"86:52:84:00:c5:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:04:59.987044 containerd[1997]: 2025-11-05 15:04:59.972 [INFO][4562] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722" Namespace="calico-system" Pod="whisker-79d458847d-vcdwj" WorkloadEndpoint="ip--172--31--23--78-k8s-whisker--79d458847d--vcdwj-eth0" Nov 5 15:05:00.076795 containerd[1997]: time="2025-11-05T15:05:00.076514172Z" level=info msg="connecting to shim 754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722" address="unix:///run/containerd/s/9058028214d4ef858a9e2ccbbda73244a1a117823000c5bb53986bd1f424343a" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:05:00.195216 systemd[1]: Started cri-containerd-754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722.scope - libcontainer container 754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722. 
Nov 5 15:05:00.318299 containerd[1997]: time="2025-11-05T15:05:00.318198625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-79d458847d-vcdwj,Uid:7c3e0183-e5b9-4364-be32-8caba037f1e7,Namespace:calico-system,Attempt:0,} returns sandbox id \"754bae466cc3f80a8340e2bef9c25bd1989f23294eb4b97f5de3e90dc50c9722\"" Nov 5 15:05:00.322215 containerd[1997]: time="2025-11-05T15:05:00.322156765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:05:00.695010 containerd[1997]: time="2025-11-05T15:05:00.694922691Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:00.697786 containerd[1997]: time="2025-11-05T15:05:00.697724631Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:05:00.697951 containerd[1997]: time="2025-11-05T15:05:00.697821567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:05:00.699343 kubelet[3315]: E1105 15:05:00.698517 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:05:00.699343 kubelet[3315]: E1105 15:05:00.698593 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:05:00.706874 kubelet[3315]: E1105 15:05:00.706752 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ed7ddee92fa141e6860da0e5d6f43cfe,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c8rkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79d458847d-vcdwj_calico-system(7c3e0183-e5b9-4364-be32-8caba037f1e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:00.710103 containerd[1997]: time="2025-11-05T15:05:00.709848723Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:05:01.034460 systemd-networkd[1575]: cali14029c146ee: Gained IPv6LL Nov 5 15:05:01.175988 containerd[1997]: time="2025-11-05T15:05:01.175920277Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:01.178185 containerd[1997]: time="2025-11-05T15:05:01.178125193Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:05:01.178357 containerd[1997]: time="2025-11-05T15:05:01.178253593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:05:01.178707 kubelet[3315]: E1105 15:05:01.178645 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:05:01.179914 kubelet[3315]: E1105 15:05:01.178827 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:05:01.180073 kubelet[3315]: E1105 15:05:01.179104 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8rkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79d458847d-vcdwj_calico-system(7c3e0183-e5b9-4364-be32-8caba037f1e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:01.180509 kubelet[3315]: E1105 15:05:01.180360 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79d458847d-vcdwj" podUID="7c3e0183-e5b9-4364-be32-8caba037f1e7" Nov 5 15:05:01.474155 systemd-networkd[1575]: vxlan.calico: Link UP Nov 5 15:05:01.474177 systemd-networkd[1575]: vxlan.calico: Gained carrier Nov 5 15:05:01.488439 containerd[1997]: 
time="2025-11-05T15:05:01.488130639Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9456ddf4d-hxsgd,Uid:8a16765c-7214-405b-a3ab-1a750d3fae14,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:05:01.490094 containerd[1997]: time="2025-11-05T15:05:01.489883131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9456ddf4d-qdk95,Uid:bd295bb4-9ab3-4f09-8d18-d7e16c0d217c,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:05:01.492353 containerd[1997]: time="2025-11-05T15:05:01.491660019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbscs,Uid:80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572,Namespace:calico-system,Attempt:0,}" Nov 5 15:05:01.569576 (udev-worker)[4515]: Network interface NamePolicy= disabled on kernel command line. Nov 5 15:05:01.935866 systemd-networkd[1575]: calie66e8e1cc76: Link UP Nov 5 15:05:01.939945 systemd-networkd[1575]: calie66e8e1cc76: Gained carrier Nov 5 15:05:01.961805 kubelet[3315]: E1105 15:05:01.961622 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79d458847d-vcdwj" podUID="7c3e0183-e5b9-4364-be32-8caba037f1e7" Nov 5 15:05:02.005981 containerd[1997]: 2025-11-05 15:05:01.686 [INFO][4828] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--qdk95-eth0 calico-apiserver-9456ddf4d- calico-apiserver bd295bb4-9ab3-4f09-8d18-d7e16c0d217c 919 0 2025-11-05 15:04:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9456ddf4d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-78 calico-apiserver-9456ddf4d-qdk95 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie66e8e1cc76 [] [] }} ContainerID="2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a" Namespace="calico-apiserver" Pod="calico-apiserver-9456ddf4d-qdk95" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--qdk95-" Nov 5 15:05:02.005981 containerd[1997]: 2025-11-05 15:05:01.687 [INFO][4828] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a" Namespace="calico-apiserver" Pod="calico-apiserver-9456ddf4d-qdk95" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--qdk95-eth0" Nov 5 15:05:02.005981 containerd[1997]: 2025-11-05 15:05:01.811 [INFO][4858] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a" HandleID="k8s-pod-network.2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a" Workload="ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--qdk95-eth0" Nov 5 15:05:02.006314 containerd[1997]: 2025-11-05 15:05:01.813 [INFO][4858] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a" HandleID="k8s-pod-network.2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a" Workload="ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--qdk95-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000295960), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-78", "pod":"calico-apiserver-9456ddf4d-qdk95", "timestamp":"2025-11-05 15:05:01.811775944 +0000 UTC"}, Hostname:"ip-172-31-23-78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:05:02.006314 containerd[1997]: 2025-11-05 15:05:01.814 [INFO][4858] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:05:02.006314 containerd[1997]: 2025-11-05 15:05:01.814 [INFO][4858] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:05:02.006314 containerd[1997]: 2025-11-05 15:05:01.814 [INFO][4858] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-78' Nov 5 15:05:02.006314 containerd[1997]: 2025-11-05 15:05:01.837 [INFO][4858] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a" host="ip-172-31-23-78" Nov 5 15:05:02.006314 containerd[1997]: 2025-11-05 15:05:01.848 [INFO][4858] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-78" Nov 5 15:05:02.006314 containerd[1997]: 2025-11-05 15:05:01.862 [INFO][4858] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:02.006314 containerd[1997]: 2025-11-05 15:05:01.871 [INFO][4858] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:02.006314 containerd[1997]: 2025-11-05 15:05:01.882 [INFO][4858] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:02.006759 containerd[1997]: 2025-11-05 15:05:01.882 [INFO][4858] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a" host="ip-172-31-23-78" Nov 5 15:05:02.006759 containerd[1997]: 2025-11-05 15:05:01.888 [INFO][4858] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a Nov 5 15:05:02.006759 containerd[1997]: 2025-11-05 15:05:01.900 [INFO][4858] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a" host="ip-172-31-23-78" Nov 5 15:05:02.006759 containerd[1997]: 2025-11-05 15:05:01.917 [INFO][4858] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.130/26] block=192.168.19.128/26 handle="k8s-pod-network.2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a" host="ip-172-31-23-78" Nov 5 15:05:02.006759 containerd[1997]: 2025-11-05 15:05:01.918 [INFO][4858] 
ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.130/26] handle="k8s-pod-network.2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a" host="ip-172-31-23-78" Nov 5 15:05:02.006759 containerd[1997]: 2025-11-05 15:05:01.918 [INFO][4858] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:05:02.006759 containerd[1997]: 2025-11-05 15:05:01.918 [INFO][4858] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.130/26] IPv6=[] ContainerID="2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a" HandleID="k8s-pod-network.2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a" Workload="ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--qdk95-eth0" Nov 5 15:05:02.007906 containerd[1997]: 2025-11-05 15:05:01.923 [INFO][4828] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a" Namespace="calico-apiserver" Pod="calico-apiserver-9456ddf4d-qdk95" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--qdk95-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--qdk95-eth0", GenerateName:"calico-apiserver-9456ddf4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd295bb4-9ab3-4f09-8d18-d7e16c0d217c", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9456ddf4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-78", ContainerID:"", Pod:"calico-apiserver-9456ddf4d-qdk95", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie66e8e1cc76", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:05:02.008236 containerd[1997]: 2025-11-05 15:05:01.923 [INFO][4828] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.130/32] ContainerID="2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a" Namespace="calico-apiserver" Pod="calico-apiserver-9456ddf4d-qdk95" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--qdk95-eth0" Nov 5 15:05:02.008236 containerd[1997]: 2025-11-05 15:05:01.923 [INFO][4828] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie66e8e1cc76 ContainerID="2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a" Namespace="calico-apiserver" Pod="calico-apiserver-9456ddf4d-qdk95" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--qdk95-eth0" Nov 5 15:05:02.008236 containerd[1997]: 2025-11-05 15:05:01.943 [INFO][4828] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a" Namespace="calico-apiserver" Pod="calico-apiserver-9456ddf4d-qdk95" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--qdk95-eth0" Nov 5 15:05:02.008413 containerd[1997]: 2025-11-05 15:05:01.951 [INFO][4828] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a" Namespace="calico-apiserver" Pod="calico-apiserver-9456ddf4d-qdk95" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--qdk95-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--qdk95-eth0", GenerateName:"calico-apiserver-9456ddf4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"bd295bb4-9ab3-4f09-8d18-d7e16c0d217c", ResourceVersion:"919", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9456ddf4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-78", ContainerID:"2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a", Pod:"calico-apiserver-9456ddf4d-qdk95", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie66e8e1cc76", MAC:"0a:b8:74:53:65:be", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:05:02.008548 containerd[1997]: 2025-11-05 15:05:01.998 [INFO][4828] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a" Namespace="calico-apiserver" Pod="calico-apiserver-9456ddf4d-qdk95" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--qdk95-eth0" Nov 5 15:05:02.095521 containerd[1997]: time="2025-11-05T15:05:02.095441462Z" level=info msg="connecting to shim 2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a" address="unix:///run/containerd/s/366e3c54278e451a38dc6308832225f905aafd513dd61edcbbd8388eb67266a6" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:05:02.147065 systemd-networkd[1575]: cali0d114797c33: Link UP Nov 5 15:05:02.151222 systemd-networkd[1575]: cali0d114797c33: Gained carrier Nov 5 15:05:02.209573 containerd[1997]: 2025-11-05 15:05:01.743 [INFO][4818] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--78-k8s-csi--node--driver--dbscs-eth0 csi-node-driver- calico-system 80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572 807 0 2025-11-05 15:04:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-23-78 csi-node-driver-dbscs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0d114797c33 [] [] }} ContainerID="c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be" Namespace="calico-system" Pod="csi-node-driver-dbscs" WorkloadEndpoint="ip--172--31--23--78-k8s-csi--node--driver--dbscs-" Nov 5 15:05:02.209573 containerd[1997]: 2025-11-05 15:05:01.745 [INFO][4818] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be" Namespace="calico-system" Pod="csi-node-driver-dbscs" WorkloadEndpoint="ip--172--31--23--78-k8s-csi--node--driver--dbscs-eth0" Nov 5 15:05:02.209573 containerd[1997]: 2025-11-05 15:05:01.884 [INFO][4864] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be" HandleID="k8s-pod-network.c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be" Workload="ip--172--31--23--78-k8s-csi--node--driver--dbscs-eth0" Nov 5 15:05:02.210282 containerd[1997]: 2025-11-05 15:05:01.885 [INFO][4864] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be" HandleID="k8s-pod-network.c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be" Workload="ip--172--31--23--78-k8s-csi--node--driver--dbscs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000332e40), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-78", "pod":"csi-node-driver-dbscs", "timestamp":"2025-11-05 15:05:01.884562604 +0000 UTC"}, Hostname:"ip-172-31-23-78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:05:02.210282 containerd[1997]: 2025-11-05 15:05:01.885 [INFO][4864] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:05:02.210282 containerd[1997]: 2025-11-05 15:05:01.918 [INFO][4864] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:05:02.210282 containerd[1997]: 2025-11-05 15:05:01.919 [INFO][4864] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-78' Nov 5 15:05:02.210282 containerd[1997]: 2025-11-05 15:05:01.984 [INFO][4864] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be" host="ip-172-31-23-78" Nov 5 15:05:02.210282 containerd[1997]: 2025-11-05 15:05:02.021 [INFO][4864] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-78" Nov 5 15:05:02.210282 containerd[1997]: 2025-11-05 15:05:02.035 [INFO][4864] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:02.210282 containerd[1997]: 2025-11-05 15:05:02.058 [INFO][4864] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:02.210282 containerd[1997]: 2025-11-05 15:05:02.068 [INFO][4864] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:02.210282 containerd[1997]: 2025-11-05 15:05:02.068 [INFO][4864] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be" host="ip-172-31-23-78" Nov 5 15:05:02.210800 containerd[1997]: 2025-11-05 15:05:02.073 [INFO][4864] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be Nov 5 15:05:02.210800 containerd[1997]: 2025-11-05 15:05:02.085 [INFO][4864] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be" host="ip-172-31-23-78" Nov 5 15:05:02.210800 containerd[1997]: 2025-11-05 15:05:02.106 [INFO][4864] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.131/26] block=192.168.19.128/26 handle="k8s-pod-network.c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be" host="ip-172-31-23-78" Nov 5 15:05:02.210800 containerd[1997]: 2025-11-05 15:05:02.107 [INFO][4864] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.131/26] handle="k8s-pod-network.c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be" host="ip-172-31-23-78" Nov 5 15:05:02.210800 containerd[1997]: 2025-11-05 15:05:02.108 [INFO][4864] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
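At this point the node ip-172-31-23-78 has handed out 192.168.19.129 (whisker), 192.168.19.130 (calico-apiserver-9456ddf4d-qdk95) and 192.168.19.131 (csi-node-driver-dbscs), all from the affine block 192.168.19.128/26 it keeps trying in the records above. A /26 leaves 6 host bits, i.e. 64 addresses spanning 192.168.19.128 through 192.168.19.191. A small illustrative check with Go's net/netip (not part of Calico; just verifying the arithmetic from the log):

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Affine block claimed by ip-172-31-23-78 in the IPAM records above.
        block := netip.MustParsePrefix("192.168.19.128/26")

        // Pod IPs assigned so far, as reported by the Calico IPAM plugin.
        assigned := []string{"192.168.19.129", "192.168.19.130", "192.168.19.131"}
        for _, s := range assigned {
            addr := netip.MustParseAddr(s)
            fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
        }

        // 32 - 26 = 6 host bits, so the block holds 64 addresses.
        fmt.Println("block size:", 1<<(32-block.Bits()))
    }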
Nov 5 15:05:02.210800 containerd[1997]: 2025-11-05 15:05:02.111 [INFO][4864] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.131/26] IPv6=[] ContainerID="c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be" HandleID="k8s-pod-network.c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be" Workload="ip--172--31--23--78-k8s-csi--node--driver--dbscs-eth0" Nov 5 15:05:02.214850 containerd[1997]: 2025-11-05 15:05:02.122 [INFO][4818] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be" Namespace="calico-system" Pod="csi-node-driver-dbscs" WorkloadEndpoint="ip--172--31--23--78-k8s-csi--node--driver--dbscs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--78-k8s-csi--node--driver--dbscs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 4, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-78", ContainerID:"", Pod:"csi-node-driver-dbscs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0d114797c33", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:05:02.215595 containerd[1997]: 2025-11-05 15:05:02.125 [INFO][4818] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.131/32] ContainerID="c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be" Namespace="calico-system" Pod="csi-node-driver-dbscs" WorkloadEndpoint="ip--172--31--23--78-k8s-csi--node--driver--dbscs-eth0" Nov 5 15:05:02.215595 containerd[1997]: 2025-11-05 15:05:02.125 [INFO][4818] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0d114797c33 ContainerID="c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be" Namespace="calico-system" Pod="csi-node-driver-dbscs" WorkloadEndpoint="ip--172--31--23--78-k8s-csi--node--driver--dbscs-eth0" Nov 5 15:05:02.215595 containerd[1997]: 2025-11-05 15:05:02.153 [INFO][4818] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be" Namespace="calico-system" Pod="csi-node-driver-dbscs" WorkloadEndpoint="ip--172--31--23--78-k8s-csi--node--driver--dbscs-eth0" Nov 5 15:05:02.217152 containerd[1997]: 2025-11-05 15:05:02.160 [INFO][4818] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be" 
Namespace="calico-system" Pod="csi-node-driver-dbscs" WorkloadEndpoint="ip--172--31--23--78-k8s-csi--node--driver--dbscs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--78-k8s-csi--node--driver--dbscs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 4, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-78", ContainerID:"c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be", Pod:"csi-node-driver-dbscs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.19.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0d114797c33", MAC:"a2:79:28:37:cb:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:05:02.217307 containerd[1997]: 2025-11-05 15:05:02.193 [INFO][4818] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be" Namespace="calico-system" Pod="csi-node-driver-dbscs" WorkloadEndpoint="ip--172--31--23--78-k8s-csi--node--driver--dbscs-eth0" Nov 5 15:05:02.249523 systemd[1]: Started cri-containerd-2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a.scope - libcontainer container 2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a. 
Nov 5 15:05:02.300863 containerd[1997]: time="2025-11-05T15:05:02.300790491Z" level=info msg="connecting to shim c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be" address="unix:///run/containerd/s/1ec362110fa7646b91edc6fe4091da26dbaddddba07a08d8d37e819445902b6c" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:05:02.323984 systemd-networkd[1575]: cali4aefc4f801a: Link UP Nov 5 15:05:02.327232 systemd-networkd[1575]: cali4aefc4f801a: Gained carrier Nov 5 15:05:02.380423 containerd[1997]: 2025-11-05 15:05:01.776 [INFO][4816] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--hxsgd-eth0 calico-apiserver-9456ddf4d- calico-apiserver 8a16765c-7214-405b-a3ab-1a750d3fae14 917 0 2025-11-05 15:04:20 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:9456ddf4d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-78 calico-apiserver-9456ddf4d-hxsgd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4aefc4f801a [] [] }} ContainerID="491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447" Namespace="calico-apiserver" Pod="calico-apiserver-9456ddf4d-hxsgd" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--hxsgd-" Nov 5 15:05:02.380423 containerd[1997]: 2025-11-05 15:05:01.776 [INFO][4816] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447" Namespace="calico-apiserver" Pod="calico-apiserver-9456ddf4d-hxsgd" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--hxsgd-eth0" Nov 5 15:05:02.380423 containerd[1997]: 2025-11-05 15:05:01.915 [INFO][4869] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447" HandleID="k8s-pod-network.491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447" Workload="ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--hxsgd-eth0" Nov 5 15:05:02.381947 containerd[1997]: 2025-11-05 15:05:01.915 [INFO][4869] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447" HandleID="k8s-pod-network.491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447" Workload="ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--hxsgd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d30f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-78", "pod":"calico-apiserver-9456ddf4d-hxsgd", "timestamp":"2025-11-05 15:05:01.915042929 +0000 UTC"}, Hostname:"ip-172-31-23-78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:05:02.381947 containerd[1997]: 2025-11-05 15:05:01.915 [INFO][4869] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:05:02.381947 containerd[1997]: 2025-11-05 15:05:02.108 [INFO][4869] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:05:02.381947 containerd[1997]: 2025-11-05 15:05:02.110 [INFO][4869] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-78' Nov 5 15:05:02.381947 containerd[1997]: 2025-11-05 15:05:02.171 [INFO][4869] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447" host="ip-172-31-23-78" Nov 5 15:05:02.381947 containerd[1997]: 2025-11-05 15:05:02.200 [INFO][4869] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-78" Nov 5 15:05:02.381947 containerd[1997]: 2025-11-05 15:05:02.221 [INFO][4869] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:02.381947 containerd[1997]: 2025-11-05 15:05:02.232 [INFO][4869] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:02.381947 containerd[1997]: 2025-11-05 15:05:02.252 [INFO][4869] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:02.383258 containerd[1997]: 2025-11-05 15:05:02.254 [INFO][4869] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447" host="ip-172-31-23-78" Nov 5 15:05:02.383258 containerd[1997]: 2025-11-05 15:05:02.261 [INFO][4869] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447 Nov 5 15:05:02.383258 containerd[1997]: 2025-11-05 15:05:02.274 [INFO][4869] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447" host="ip-172-31-23-78" Nov 5 15:05:02.383258 containerd[1997]: 2025-11-05 15:05:02.294 [INFO][4869] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.132/26] block=192.168.19.128/26 handle="k8s-pod-network.491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447" host="ip-172-31-23-78" Nov 5 15:05:02.383258 containerd[1997]: 2025-11-05 15:05:02.294 [INFO][4869] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.132/26] handle="k8s-pod-network.491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447" host="ip-172-31-23-78" Nov 5 15:05:02.383258 containerd[1997]: 2025-11-05 15:05:02.295 [INFO][4869] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
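Note the ordering visible in the lock messages: request [4869] (calico-apiserver-9456ddf4d-hxsgd) logged "About to acquire host-wide IPAM lock" at 15:05:01.915 but only acquired it at 15:05:02.108, immediately after request [4864] (csi-node-driver-dbscs) released it. The host-wide lock serializes concurrent CNI ADDs on the node, which is why the block updates and claimed IPs come out one at a time. A toy schematic of that serialization in Go, assuming a hypothetical mutex-guarded allocator; Calico's real allocator is datastore-backed and this only illustrates the mutual exclusion, not its implementation:

    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    // blockAllocator is a hypothetical stand-in for the per-host IPAM lock:
    // one allocation at a time, handing out the next free address from one block.
    type blockAllocator struct {
        mu   sync.Mutex
        next netip.Addr
    }

    func (b *blockAllocator) assign(pod string) netip.Addr {
        b.mu.Lock()         // "Acquired host-wide IPAM lock."
        defer b.mu.Unlock() // "Released host-wide IPAM lock."
        addr := b.next
        b.next = b.next.Next()
        fmt.Printf("assigned %s to %s\n", addr, pod)
        return addr
    }

    func main() {
        alloc := &blockAllocator{next: netip.MustParseAddr("192.168.19.129")}

        pods := []string{
            "whisker-79d458847d-vcdwj",
            "calico-apiserver-9456ddf4d-qdk95",
            "csi-node-driver-dbscs",
            "calico-apiserver-9456ddf4d-hxsgd",
        }
        var wg sync.WaitGroup
        for _, pod := range pods {
            wg.Add(1)
            go func(p string) {
                defer wg.Done()
                // Goroutine scheduling order is not guaranteed; the point is that
                // no two assignments run concurrently, so no address is handed out twice.
                alloc.assign(p)
            }(pod)
        }
        wg.Wait()
    }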
Nov 5 15:05:02.383258 containerd[1997]: 2025-11-05 15:05:02.295 [INFO][4869] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.132/26] IPv6=[] ContainerID="491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447" HandleID="k8s-pod-network.491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447" Workload="ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--hxsgd-eth0" Nov 5 15:05:02.383610 containerd[1997]: 2025-11-05 15:05:02.307 [INFO][4816] cni-plugin/k8s.go 418: Populated endpoint ContainerID="491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447" Namespace="calico-apiserver" Pod="calico-apiserver-9456ddf4d-hxsgd" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--hxsgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--hxsgd-eth0", GenerateName:"calico-apiserver-9456ddf4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a16765c-7214-405b-a3ab-1a750d3fae14", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9456ddf4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-78", ContainerID:"", Pod:"calico-apiserver-9456ddf4d-hxsgd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4aefc4f801a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:05:02.383751 containerd[1997]: 2025-11-05 15:05:02.308 [INFO][4816] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.132/32] ContainerID="491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447" Namespace="calico-apiserver" Pod="calico-apiserver-9456ddf4d-hxsgd" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--hxsgd-eth0" Nov 5 15:05:02.383751 containerd[1997]: 2025-11-05 15:05:02.308 [INFO][4816] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4aefc4f801a ContainerID="491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447" Namespace="calico-apiserver" Pod="calico-apiserver-9456ddf4d-hxsgd" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--hxsgd-eth0" Nov 5 15:05:02.383751 containerd[1997]: 2025-11-05 15:05:02.337 [INFO][4816] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447" Namespace="calico-apiserver" Pod="calico-apiserver-9456ddf4d-hxsgd" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--hxsgd-eth0" Nov 5 15:05:02.383964 containerd[1997]: 2025-11-05 15:05:02.340 [INFO][4816] cni-plugin/k8s.go 446: Added Mac, interface name, and active container 
ID to endpoint ContainerID="491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447" Namespace="calico-apiserver" Pod="calico-apiserver-9456ddf4d-hxsgd" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--hxsgd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--hxsgd-eth0", GenerateName:"calico-apiserver-9456ddf4d-", Namespace:"calico-apiserver", SelfLink:"", UID:"8a16765c-7214-405b-a3ab-1a750d3fae14", ResourceVersion:"917", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 4, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"9456ddf4d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-78", ContainerID:"491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447", Pod:"calico-apiserver-9456ddf4d-hxsgd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4aefc4f801a", MAC:"1e:15:e2:26:d3:04", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:05:02.385032 containerd[1997]: 2025-11-05 15:05:02.372 [INFO][4816] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447" Namespace="calico-apiserver" Pod="calico-apiserver-9456ddf4d-hxsgd" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--9456ddf4d--hxsgd-eth0" Nov 5 15:05:02.428200 systemd[1]: Started cri-containerd-c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be.scope - libcontainer container c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be. Nov 5 15:05:02.458641 containerd[1997]: time="2025-11-05T15:05:02.458574999Z" level=info msg="connecting to shim 491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447" address="unix:///run/containerd/s/a3c43362decb9a3284507c7998489324ba4175b4e3cd975cff78d1b22e1ba10b" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:05:02.489943 containerd[1997]: time="2025-11-05T15:05:02.487737795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f86b5b57-hhskm,Uid:88b7c103-d45c-4fa8-81a5-56483036338a,Namespace:calico-system,Attempt:0,}" Nov 5 15:05:02.566097 systemd[1]: Started cri-containerd-491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447.scope - libcontainer container 491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447. 
Nov 5 15:05:02.654683 containerd[1997]: time="2025-11-05T15:05:02.654604468Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9456ddf4d-qdk95,Uid:bd295bb4-9ab3-4f09-8d18-d7e16c0d217c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2c477e3685ae893331838c63b6e08087e5f8a8dc7bb2971325d77e4966ce609a\"" Nov 5 15:05:02.675362 containerd[1997]: time="2025-11-05T15:05:02.674070148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:05:02.720595 containerd[1997]: time="2025-11-05T15:05:02.720055841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbscs,Uid:80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572,Namespace:calico-system,Attempt:0,} returns sandbox id \"c4adafbf4f9953e8193f97a83761049be2ef134e6c8c40d0709a91f61bb376be\"" Nov 5 15:05:02.936469 containerd[1997]: time="2025-11-05T15:05:02.936321462Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-9456ddf4d-hxsgd,Uid:8a16765c-7214-405b-a3ab-1a750d3fae14,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"491ea0e46f51f6af9f4cb50b4f5b98c628fdc0cafff646d34dce3977fbfa6447\"" Nov 5 15:05:02.956173 systemd-networkd[1575]: cali093e98d8bd0: Link UP Nov 5 15:05:02.959166 systemd-networkd[1575]: cali093e98d8bd0: Gained carrier Nov 5 15:05:02.965397 containerd[1997]: time="2025-11-05T15:05:02.965161158Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:02.970837 containerd[1997]: time="2025-11-05T15:05:02.970497390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:05:02.970837 containerd[1997]: time="2025-11-05T15:05:02.970569570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:05:02.971686 kubelet[3315]: E1105 15:05:02.971145 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:05:02.971686 kubelet[3315]: E1105 15:05:02.971214 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:05:02.971686 kubelet[3315]: E1105 15:05:02.971504 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cr7wf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9456ddf4d-qdk95_calico-apiserver(bd295bb4-9ab3-4f09-8d18-d7e16c0d217c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:02.974754 kubelet[3315]: E1105 15:05:02.973323 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9456ddf4d-qdk95" podUID="bd295bb4-9ab3-4f09-8d18-d7e16c0d217c" Nov 5 15:05:02.977429 containerd[1997]: time="2025-11-05T15:05:02.976654206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:05:02.992612 kubelet[3315]: E1105 15:05:02.992557 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9456ddf4d-qdk95" podUID="bd295bb4-9ab3-4f09-8d18-d7e16c0d217c" 
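Every failed pull in this window reduces to the same condition: ghcr.io answers 404 for the v3.30.4 tags, containerd surfaces that as a NotFound error for the unresolved reference, and kubelet turns ErrImagePull into ImagePullBackOff on the next sync. A minimal sketch of reproducing that check directly against containerd, assuming the 1.x Go client and the default socket path; the image reference is the one failing in the log above, and whether a given resolution failure satisfies errdefs.IsNotFound depends on the containerd version:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/errdefs"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Image reference taken from the failing pull in the log above.
        ref := "ghcr.io/flatcar/calico/apiserver:v3.30.4"
        _, err = client.Pull(ctx, ref, containerd.WithPullUnpack)
        switch {
        case err == nil:
            fmt.Println("pulled", ref)
        case errdefs.IsNotFound(err):
            // Corresponds to the "failed to resolve reference ... not found" errors kubelet reports.
            fmt.Println("tag does not exist upstream:", err)
        default:
            log.Fatal(err)
        }
    }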
Nov 5 15:05:03.020285 containerd[1997]: 2025-11-05 15:05:02.718 [INFO][5016] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--78-k8s-calico--kube--controllers--78f86b5b57--hhskm-eth0 calico-kube-controllers-78f86b5b57- calico-system 88b7c103-d45c-4fa8-81a5-56483036338a 918 0 2025-11-05 15:04:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78f86b5b57 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-23-78 calico-kube-controllers-78f86b5b57-hhskm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali093e98d8bd0 [] [] }} ContainerID="b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645" Namespace="calico-system" Pod="calico-kube-controllers-78f86b5b57-hhskm" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--kube--controllers--78f86b5b57--hhskm-" Nov 5 15:05:03.020285 containerd[1997]: 2025-11-05 15:05:02.721 [INFO][5016] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645" Namespace="calico-system" Pod="calico-kube-controllers-78f86b5b57-hhskm" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--kube--controllers--78f86b5b57--hhskm-eth0" Nov 5 15:05:03.020285 containerd[1997]: 2025-11-05 15:05:02.813 [INFO][5054] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645" HandleID="k8s-pod-network.b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645" Workload="ip--172--31--23--78-k8s-calico--kube--controllers--78f86b5b57--hhskm-eth0" Nov 5 15:05:03.021021 containerd[1997]: 2025-11-05 15:05:02.815 [INFO][5054] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645" HandleID="k8s-pod-network.b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645" Workload="ip--172--31--23--78-k8s-calico--kube--controllers--78f86b5b57--hhskm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103700), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-78", "pod":"calico-kube-controllers-78f86b5b57-hhskm", "timestamp":"2025-11-05 15:05:02.813719501 +0000 UTC"}, Hostname:"ip-172-31-23-78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:05:03.021021 containerd[1997]: 2025-11-05 15:05:02.815 [INFO][5054] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:05:03.021021 containerd[1997]: 2025-11-05 15:05:02.815 [INFO][5054] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:05:03.021021 containerd[1997]: 2025-11-05 15:05:02.816 [INFO][5054] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-78' Nov 5 15:05:03.021021 containerd[1997]: 2025-11-05 15:05:02.835 [INFO][5054] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645" host="ip-172-31-23-78" Nov 5 15:05:03.021021 containerd[1997]: 2025-11-05 15:05:02.851 [INFO][5054] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-78" Nov 5 15:05:03.021021 containerd[1997]: 2025-11-05 15:05:02.865 [INFO][5054] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:03.021021 containerd[1997]: 2025-11-05 15:05:02.877 [INFO][5054] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:03.021021 containerd[1997]: 2025-11-05 15:05:02.887 [INFO][5054] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:03.021551 containerd[1997]: 2025-11-05 15:05:02.888 [INFO][5054] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645" host="ip-172-31-23-78" Nov 5 15:05:03.021551 containerd[1997]: 2025-11-05 15:05:02.893 [INFO][5054] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645 Nov 5 15:05:03.021551 containerd[1997]: 2025-11-05 15:05:02.919 [INFO][5054] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645" host="ip-172-31-23-78" Nov 5 15:05:03.021551 containerd[1997]: 2025-11-05 15:05:02.934 [INFO][5054] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.133/26] block=192.168.19.128/26 handle="k8s-pod-network.b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645" host="ip-172-31-23-78" Nov 5 15:05:03.021551 containerd[1997]: 2025-11-05 15:05:02.935 [INFO][5054] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.133/26] handle="k8s-pod-network.b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645" host="ip-172-31-23-78" Nov 5 15:05:03.021551 containerd[1997]: 2025-11-05 15:05:02.935 [INFO][5054] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:05:03.021551 containerd[1997]: 2025-11-05 15:05:02.935 [INFO][5054] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.133/26] IPv6=[] ContainerID="b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645" HandleID="k8s-pod-network.b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645" Workload="ip--172--31--23--78-k8s-calico--kube--controllers--78f86b5b57--hhskm-eth0" Nov 5 15:05:03.023760 containerd[1997]: 2025-11-05 15:05:02.945 [INFO][5016] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645" Namespace="calico-system" Pod="calico-kube-controllers-78f86b5b57-hhskm" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--kube--controllers--78f86b5b57--hhskm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--78-k8s-calico--kube--controllers--78f86b5b57--hhskm-eth0", GenerateName:"calico-kube-controllers-78f86b5b57-", Namespace:"calico-system", SelfLink:"", UID:"88b7c103-d45c-4fa8-81a5-56483036338a", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 4, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f86b5b57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-78", ContainerID:"", Pod:"calico-kube-controllers-78f86b5b57-hhskm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali093e98d8bd0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:05:03.024461 containerd[1997]: 2025-11-05 15:05:02.946 [INFO][5016] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.133/32] ContainerID="b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645" Namespace="calico-system" Pod="calico-kube-controllers-78f86b5b57-hhskm" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--kube--controllers--78f86b5b57--hhskm-eth0" Nov 5 15:05:03.024461 containerd[1997]: 2025-11-05 15:05:02.946 [INFO][5016] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali093e98d8bd0 ContainerID="b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645" Namespace="calico-system" Pod="calico-kube-controllers-78f86b5b57-hhskm" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--kube--controllers--78f86b5b57--hhskm-eth0" Nov 5 15:05:03.024461 containerd[1997]: 2025-11-05 15:05:02.960 [INFO][5016] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645" Namespace="calico-system" Pod="calico-kube-controllers-78f86b5b57-hhskm" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--kube--controllers--78f86b5b57--hhskm-eth0" Nov 5 15:05:03.024778 containerd[1997]: 2025-11-05 
15:05:02.963 [INFO][5016] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645" Namespace="calico-system" Pod="calico-kube-controllers-78f86b5b57-hhskm" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--kube--controllers--78f86b5b57--hhskm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--78-k8s-calico--kube--controllers--78f86b5b57--hhskm-eth0", GenerateName:"calico-kube-controllers-78f86b5b57-", Namespace:"calico-system", SelfLink:"", UID:"88b7c103-d45c-4fa8-81a5-56483036338a", ResourceVersion:"918", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 4, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78f86b5b57", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-78", ContainerID:"b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645", Pod:"calico-kube-controllers-78f86b5b57-hhskm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.19.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali093e98d8bd0", MAC:"ea:6b:dc:ff:b1:27", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:05:03.025234 containerd[1997]: 2025-11-05 15:05:03.009 [INFO][5016] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645" Namespace="calico-system" Pod="calico-kube-controllers-78f86b5b57-hhskm" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--kube--controllers--78f86b5b57--hhskm-eth0" Nov 5 15:05:03.114233 containerd[1997]: time="2025-11-05T15:05:03.114138759Z" level=info msg="connecting to shim b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645" address="unix:///run/containerd/s/8755a4c8efe38489bcb0af30f7edf6502d9ef98917273bd37c33624ca98d0721" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:05:03.146529 systemd-networkd[1575]: vxlan.calico: Gained IPv6LL Nov 5 15:05:03.175500 systemd[1]: Started cri-containerd-b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645.scope - libcontainer container b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645. 
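The containerd entries above trace Calico's per-pod IPAM sequence: acquire the host-wide IPAM lock, confirm the host's affinity for the 192.168.19.128/26 block, claim the next free address (192.168.19.133 here), write the block, then release the lock. The sketch below is only a toy illustration of that block-claiming pattern under those assumptions; it is not Calico's implementation, and the block type, claim method, and pre-seeded addresses are hypothetical.

// ipamsketch.go — illustrative only: a toy model of the block-based
// allocation sequence visible in the log above. NOT Calico's code.
package main

import (
	"fmt"
	"net"
	"sync"
)

type block struct {
	mu   sync.Mutex // stands in for the "host-wide IPAM lock" in the log
	cidr *net.IPNet // e.g. 192.168.19.128/26
	used map[string]bool
}

func newBlock(cidr string) *block {
	_, n, err := net.ParseCIDR(cidr)
	if err != nil {
		panic(err)
	}
	return &block{cidr: n, used: map[string]bool{}}
}

// next returns the numerically following IP address.
func next(ip net.IP) net.IP {
	out := make(net.IP, len(ip))
	copy(out, ip)
	for i := len(out) - 1; i >= 0; i-- {
		out[i]++
		if out[i] != 0 {
			break
		}
	}
	return out
}

// claim hands out the lowest unused address in the block, mimicking the
// "Attempting to assign 1 addresses from block" step in the log.
func (b *block) claim() (net.IP, error) {
	b.mu.Lock()
	defer b.mu.Unlock()
	for ip := b.cidr.IP; b.cidr.Contains(ip); ip = next(ip) {
		if !b.used[ip.String()] {
			b.used[ip.String()] = true
			return ip, nil
		}
	}
	return nil, fmt.Errorf("block %s exhausted", b.cidr)
}

func main() {
	b := newBlock("192.168.19.128/26")
	// Pretend .128–.132 were already handed to earlier endpoints, so the
	// next claim lands on .133 as it does for the kube-controllers pod.
	for _, s := range []string{
		"192.168.19.128", "192.168.19.129", "192.168.19.130",
		"192.168.19.131", "192.168.19.132",
	} {
		b.used[s] = true
	}
	ip, err := b.claim()
	if err != nil {
		panic(err)
	}
	fmt.Println("claimed", ip) // prints: claimed 192.168.19.133
}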
Nov 5 15:05:03.265844 containerd[1997]: time="2025-11-05T15:05:03.265569219Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:03.269714 containerd[1997]: time="2025-11-05T15:05:03.269542935Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:05:03.270175 containerd[1997]: time="2025-11-05T15:05:03.270121359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:05:03.270701 kubelet[3315]: E1105 15:05:03.270628 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:05:03.270961 kubelet[3315]: E1105 15:05:03.270920 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:05:03.271377 kubelet[3315]: E1105 15:05:03.271277 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mt6w6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dbscs_calico-system(80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:03.273605 containerd[1997]: time="2025-11-05T15:05:03.273341595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:05:03.368043 containerd[1997]: time="2025-11-05T15:05:03.367728700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78f86b5b57-hhskm,Uid:88b7c103-d45c-4fa8-81a5-56483036338a,Namespace:calico-system,Attempt:0,} returns sandbox id \"b29bae303238f3ffd0d5a3da23c4eacfbe58fa2cb040d98ac63c17c2ba2ab645\"" Nov 5 15:05:03.487065 containerd[1997]: time="2025-11-05T15:05:03.486945172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rvxhd,Uid:9e79ac53-08d7-4495-96f5-177d69064854,Namespace:kube-system,Attempt:0,}" Nov 5 15:05:03.487388 containerd[1997]: time="2025-11-05T15:05:03.487253008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84dbb9fd44-dctgw,Uid:8b23d5a1-7fb9-4412-bcea-afb711fedf9c,Namespace:calico-apiserver,Attempt:0,}" Nov 5 15:05:03.533108 systemd-networkd[1575]: cali0d114797c33: Gained IPv6LL Nov 5 15:05:03.873394 systemd-networkd[1575]: cali20c6b1eb5b7: Link UP Nov 5 15:05:03.876099 systemd-networkd[1575]: cali20c6b1eb5b7: Gained carrier Nov 5 15:05:03.877943 containerd[1997]: time="2025-11-05T15:05:03.877290450Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:03.887031 containerd[1997]: time="2025-11-05T15:05:03.882847626Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:05:03.889582 kubelet[3315]: E1105 15:05:03.889335 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:05:03.889582 kubelet[3315]: E1105 15:05:03.889402 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:05:03.890960 kubelet[3315]: E1105 15:05:03.889740 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7czzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9456ddf4d-hxsgd_calico-apiserver(8a16765c-7214-405b-a3ab-1a750d3fae14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:03.892629 containerd[1997]: time="2025-11-05T15:05:03.882956034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:05:03.892629 containerd[1997]: time="2025-11-05T15:05:03.890428374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:05:03.895151 kubelet[3315]: E1105 15:05:03.891723 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9456ddf4d-hxsgd" podUID="8a16765c-7214-405b-a3ab-1a750d3fae14" Nov 5 15:05:03.915012 systemd-networkd[1575]: calie66e8e1cc76: Gained IPv6LL Nov 5 15:05:03.967550 containerd[1997]: 2025-11-05 15:05:03.664 [INFO][5155] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--78-k8s-calico--apiserver--84dbb9fd44--dctgw-eth0 calico-apiserver-84dbb9fd44- calico-apiserver 8b23d5a1-7fb9-4412-bcea-afb711fedf9c 920 0 2025-11-05 
15:04:24 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:84dbb9fd44 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-78 calico-apiserver-84dbb9fd44-dctgw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali20c6b1eb5b7 [] [] }} ContainerID="f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926" Namespace="calico-apiserver" Pod="calico-apiserver-84dbb9fd44-dctgw" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--84dbb9fd44--dctgw-" Nov 5 15:05:03.967550 containerd[1997]: 2025-11-05 15:05:03.666 [INFO][5155] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926" Namespace="calico-apiserver" Pod="calico-apiserver-84dbb9fd44-dctgw" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--84dbb9fd44--dctgw-eth0" Nov 5 15:05:03.967550 containerd[1997]: 2025-11-05 15:05:03.760 [INFO][5173] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926" HandleID="k8s-pod-network.f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926" Workload="ip--172--31--23--78-k8s-calico--apiserver--84dbb9fd44--dctgw-eth0" Nov 5 15:05:03.968110 containerd[1997]: 2025-11-05 15:05:03.761 [INFO][5173] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926" HandleID="k8s-pod-network.f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926" Workload="ip--172--31--23--78-k8s-calico--apiserver--84dbb9fd44--dctgw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d39b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-78", "pod":"calico-apiserver-84dbb9fd44-dctgw", "timestamp":"2025-11-05 15:05:03.760860066 +0000 UTC"}, Hostname:"ip-172-31-23-78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:05:03.968110 containerd[1997]: 2025-11-05 15:05:03.761 [INFO][5173] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:05:03.968110 containerd[1997]: 2025-11-05 15:05:03.761 [INFO][5173] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:05:03.968110 containerd[1997]: 2025-11-05 15:05:03.761 [INFO][5173] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-78' Nov 5 15:05:03.968110 containerd[1997]: 2025-11-05 15:05:03.778 [INFO][5173] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926" host="ip-172-31-23-78" Nov 5 15:05:03.968110 containerd[1997]: 2025-11-05 15:05:03.791 [INFO][5173] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-78" Nov 5 15:05:03.968110 containerd[1997]: 2025-11-05 15:05:03.798 [INFO][5173] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:03.968110 containerd[1997]: 2025-11-05 15:05:03.802 [INFO][5173] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:03.968110 containerd[1997]: 2025-11-05 15:05:03.806 [INFO][5173] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:03.970240 containerd[1997]: 2025-11-05 15:05:03.806 [INFO][5173] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926" host="ip-172-31-23-78" Nov 5 15:05:03.970240 containerd[1997]: 2025-11-05 15:05:03.809 [INFO][5173] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926 Nov 5 15:05:03.970240 containerd[1997]: 2025-11-05 15:05:03.824 [INFO][5173] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926" host="ip-172-31-23-78" Nov 5 15:05:03.970240 containerd[1997]: 2025-11-05 15:05:03.851 [INFO][5173] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.134/26] block=192.168.19.128/26 handle="k8s-pod-network.f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926" host="ip-172-31-23-78" Nov 5 15:05:03.970240 containerd[1997]: 2025-11-05 15:05:03.851 [INFO][5173] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.134/26] handle="k8s-pod-network.f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926" host="ip-172-31-23-78" Nov 5 15:05:03.970240 containerd[1997]: 2025-11-05 15:05:03.851 [INFO][5173] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:05:03.970240 containerd[1997]: 2025-11-05 15:05:03.851 [INFO][5173] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.134/26] IPv6=[] ContainerID="f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926" HandleID="k8s-pod-network.f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926" Workload="ip--172--31--23--78-k8s-calico--apiserver--84dbb9fd44--dctgw-eth0" Nov 5 15:05:03.970603 containerd[1997]: 2025-11-05 15:05:03.859 [INFO][5155] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926" Namespace="calico-apiserver" Pod="calico-apiserver-84dbb9fd44-dctgw" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--84dbb9fd44--dctgw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--78-k8s-calico--apiserver--84dbb9fd44--dctgw-eth0", GenerateName:"calico-apiserver-84dbb9fd44-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b23d5a1-7fb9-4412-bcea-afb711fedf9c", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 4, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84dbb9fd44", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-78", ContainerID:"", Pod:"calico-apiserver-84dbb9fd44-dctgw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali20c6b1eb5b7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:05:03.970757 containerd[1997]: 2025-11-05 15:05:03.859 [INFO][5155] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.134/32] ContainerID="f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926" Namespace="calico-apiserver" Pod="calico-apiserver-84dbb9fd44-dctgw" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--84dbb9fd44--dctgw-eth0" Nov 5 15:05:03.970757 containerd[1997]: 2025-11-05 15:05:03.859 [INFO][5155] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20c6b1eb5b7 ContainerID="f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926" Namespace="calico-apiserver" Pod="calico-apiserver-84dbb9fd44-dctgw" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--84dbb9fd44--dctgw-eth0" Nov 5 15:05:03.970757 containerd[1997]: 2025-11-05 15:05:03.875 [INFO][5155] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926" Namespace="calico-apiserver" Pod="calico-apiserver-84dbb9fd44-dctgw" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--84dbb9fd44--dctgw-eth0" Nov 5 15:05:03.972049 containerd[1997]: 2025-11-05 15:05:03.885 [INFO][5155] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926" Namespace="calico-apiserver" Pod="calico-apiserver-84dbb9fd44-dctgw" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--84dbb9fd44--dctgw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--78-k8s-calico--apiserver--84dbb9fd44--dctgw-eth0", GenerateName:"calico-apiserver-84dbb9fd44-", Namespace:"calico-apiserver", SelfLink:"", UID:"8b23d5a1-7fb9-4412-bcea-afb711fedf9c", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 4, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"84dbb9fd44", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-78", ContainerID:"f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926", Pod:"calico-apiserver-84dbb9fd44-dctgw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.19.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali20c6b1eb5b7", MAC:"ae:82:5e:e2:57:fb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:05:03.972267 containerd[1997]: 2025-11-05 15:05:03.962 [INFO][5155] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926" Namespace="calico-apiserver" Pod="calico-apiserver-84dbb9fd44-dctgw" WorkloadEndpoint="ip--172--31--23--78-k8s-calico--apiserver--84dbb9fd44--dctgw-eth0" Nov 5 15:05:03.999653 kubelet[3315]: E1105 15:05:03.999201 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9456ddf4d-hxsgd" podUID="8a16765c-7214-405b-a3ab-1a750d3fae14" Nov 5 15:05:04.002551 kubelet[3315]: E1105 15:05:04.002393 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9456ddf4d-qdk95" podUID="bd295bb4-9ab3-4f09-8d18-d7e16c0d217c" Nov 5 15:05:04.037382 systemd[1]: Started 
sshd@7-172.31.23.78:22-139.178.89.65:54538.service - OpenSSH per-connection server daemon (139.178.89.65:54538). Nov 5 15:05:04.064266 containerd[1997]: time="2025-11-05T15:05:04.064193703Z" level=info msg="connecting to shim f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926" address="unix:///run/containerd/s/3770a08d62192f2b27a02ebb9665d6fb89622f1bf9e0ed7f50cf8e6f1fac1c99" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:05:04.176616 systemd-networkd[1575]: cali093e98d8bd0: Gained IPv6LL Nov 5 15:05:04.228247 systemd[1]: Started cri-containerd-f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926.scope - libcontainer container f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926. Nov 5 15:05:04.298091 systemd-networkd[1575]: cali4aefc4f801a: Gained IPv6LL Nov 5 15:05:04.339008 systemd-networkd[1575]: cali3a51143cf7e: Link UP Nov 5 15:05:04.344934 systemd-networkd[1575]: cali3a51143cf7e: Gained carrier Nov 5 15:05:04.391647 sshd[5202]: Accepted publickey for core from 139.178.89.65 port 54538 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:05:04.399613 sshd-session[5202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:05:04.401408 containerd[1997]: 2025-11-05 15:05:03.680 [INFO][5153] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--78-k8s-coredns--674b8bbfcf--rvxhd-eth0 coredns-674b8bbfcf- kube-system 9e79ac53-08d7-4495-96f5-177d69064854 916 0 2025-11-05 15:04:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-78 coredns-674b8bbfcf-rvxhd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3a51143cf7e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147" Namespace="kube-system" Pod="coredns-674b8bbfcf-rvxhd" WorkloadEndpoint="ip--172--31--23--78-k8s-coredns--674b8bbfcf--rvxhd-" Nov 5 15:05:04.401408 containerd[1997]: 2025-11-05 15:05:03.680 [INFO][5153] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147" Namespace="kube-system" Pod="coredns-674b8bbfcf-rvxhd" WorkloadEndpoint="ip--172--31--23--78-k8s-coredns--674b8bbfcf--rvxhd-eth0" Nov 5 15:05:04.401408 containerd[1997]: 2025-11-05 15:05:03.764 [INFO][5178] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147" HandleID="k8s-pod-network.0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147" Workload="ip--172--31--23--78-k8s-coredns--674b8bbfcf--rvxhd-eth0" Nov 5 15:05:04.403414 containerd[1997]: 2025-11-05 15:05:03.765 [INFO][5178] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147" HandleID="k8s-pod-network.0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147" Workload="ip--172--31--23--78-k8s-coredns--674b8bbfcf--rvxhd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3050), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-78", "pod":"coredns-674b8bbfcf-rvxhd", "timestamp":"2025-11-05 15:05:03.764310006 +0000 UTC"}, Hostname:"ip-172-31-23-78", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:05:04.403414 containerd[1997]: 2025-11-05 15:05:03.765 [INFO][5178] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:05:04.403414 containerd[1997]: 2025-11-05 15:05:03.852 [INFO][5178] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:05:04.403414 containerd[1997]: 2025-11-05 15:05:03.852 [INFO][5178] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-78' Nov 5 15:05:04.403414 containerd[1997]: 2025-11-05 15:05:03.898 [INFO][5178] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147" host="ip-172-31-23-78" Nov 5 15:05:04.403414 containerd[1997]: 2025-11-05 15:05:04.050 [INFO][5178] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-78" Nov 5 15:05:04.403414 containerd[1997]: 2025-11-05 15:05:04.143 [INFO][5178] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:04.403414 containerd[1997]: 2025-11-05 15:05:04.176 [INFO][5178] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:04.403414 containerd[1997]: 2025-11-05 15:05:04.195 [INFO][5178] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:04.403414 containerd[1997]: 2025-11-05 15:05:04.195 [INFO][5178] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147" host="ip-172-31-23-78" Nov 5 15:05:04.406692 containerd[1997]: 2025-11-05 15:05:04.256 [INFO][5178] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147 Nov 5 15:05:04.406692 containerd[1997]: 2025-11-05 15:05:04.271 [INFO][5178] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147" host="ip-172-31-23-78" Nov 5 15:05:04.406692 containerd[1997]: 2025-11-05 15:05:04.310 [INFO][5178] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.135/26] block=192.168.19.128/26 handle="k8s-pod-network.0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147" host="ip-172-31-23-78" Nov 5 15:05:04.406692 containerd[1997]: 2025-11-05 15:05:04.314 [INFO][5178] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.135/26] handle="k8s-pod-network.0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147" host="ip-172-31-23-78" Nov 5 15:05:04.406692 containerd[1997]: 2025-11-05 15:05:04.314 [INFO][5178] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Nov 5 15:05:04.406692 containerd[1997]: 2025-11-05 15:05:04.314 [INFO][5178] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.135/26] IPv6=[] ContainerID="0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147" HandleID="k8s-pod-network.0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147" Workload="ip--172--31--23--78-k8s-coredns--674b8bbfcf--rvxhd-eth0" Nov 5 15:05:04.408502 containerd[1997]: 2025-11-05 15:05:04.326 [INFO][5153] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147" Namespace="kube-system" Pod="coredns-674b8bbfcf-rvxhd" WorkloadEndpoint="ip--172--31--23--78-k8s-coredns--674b8bbfcf--rvxhd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--78-k8s-coredns--674b8bbfcf--rvxhd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9e79ac53-08d7-4495-96f5-177d69064854", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 4, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-78", ContainerID:"", Pod:"coredns-674b8bbfcf-rvxhd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a51143cf7e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:05:04.408502 containerd[1997]: 2025-11-05 15:05:04.326 [INFO][5153] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.135/32] ContainerID="0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147" Namespace="kube-system" Pod="coredns-674b8bbfcf-rvxhd" WorkloadEndpoint="ip--172--31--23--78-k8s-coredns--674b8bbfcf--rvxhd-eth0" Nov 5 15:05:04.408502 containerd[1997]: 2025-11-05 15:05:04.326 [INFO][5153] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3a51143cf7e ContainerID="0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147" Namespace="kube-system" Pod="coredns-674b8bbfcf-rvxhd" WorkloadEndpoint="ip--172--31--23--78-k8s-coredns--674b8bbfcf--rvxhd-eth0" Nov 5 15:05:04.408502 containerd[1997]: 2025-11-05 15:05:04.342 [INFO][5153] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147" Namespace="kube-system" Pod="coredns-674b8bbfcf-rvxhd" 
WorkloadEndpoint="ip--172--31--23--78-k8s-coredns--674b8bbfcf--rvxhd-eth0" Nov 5 15:05:04.408502 containerd[1997]: 2025-11-05 15:05:04.344 [INFO][5153] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147" Namespace="kube-system" Pod="coredns-674b8bbfcf-rvxhd" WorkloadEndpoint="ip--172--31--23--78-k8s-coredns--674b8bbfcf--rvxhd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--78-k8s-coredns--674b8bbfcf--rvxhd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9e79ac53-08d7-4495-96f5-177d69064854", ResourceVersion:"916", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 4, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-78", ContainerID:"0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147", Pod:"coredns-674b8bbfcf-rvxhd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3a51143cf7e", MAC:"fa:5d:8b:30:01:1d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:05:04.408502 containerd[1997]: 2025-11-05 15:05:04.394 [INFO][5153] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147" Namespace="kube-system" Pod="coredns-674b8bbfcf-rvxhd" WorkloadEndpoint="ip--172--31--23--78-k8s-coredns--674b8bbfcf--rvxhd-eth0" Nov 5 15:05:04.421315 systemd-logind[1950]: New session 8 of user core. Nov 5 15:05:04.431349 systemd[1]: Started session-8.scope - Session 8 of User core. 
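The Port values in the coredns-674b8bbfcf-rvxhd endpoint dump above are printed in hexadecimal: 0x35 is 53 (the dns and dns-tcp ports) and 0x23c1 is 9153 (the CoreDNS metrics port). A trivial decode, purely for illustration:

// portdecode.go — decodes the hex Port fields seen in the endpoint dump.
package main

import "fmt"

func main() {
	ports := []struct {
		name string
		val  int
	}{
		{"dns (UDP)", 0x35},
		{"dns-tcp (TCP)", 0x35},
		{"metrics (TCP)", 0x23c1},
	}
	for _, p := range ports {
		fmt.Printf("%-14s %#x = %d\n", p.name, p.val, p.val)
	}
	// Output:
	// dns (UDP)      0x35 = 53
	// dns-tcp (TCP)  0x35 = 53
	// metrics (TCP)  0x23c1 = 9153
}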
Nov 5 15:05:04.489700 containerd[1997]: time="2025-11-05T15:05:04.489648269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-q7wlk,Uid:ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7,Namespace:calico-system,Attempt:0,}" Nov 5 15:05:04.510932 containerd[1997]: time="2025-11-05T15:05:04.510667686Z" level=info msg="connecting to shim 0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147" address="unix:///run/containerd/s/ff3013c96c572b2695440801c1730bd8ff1094d78c8cb024c4f9a407b8ab0832" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:05:04.633161 containerd[1997]: time="2025-11-05T15:05:04.632992938Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:04.645733 containerd[1997]: time="2025-11-05T15:05:04.644728986Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:05:04.645287 systemd[1]: Started cri-containerd-0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147.scope - libcontainer container 0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147. Nov 5 15:05:04.654934 containerd[1997]: time="2025-11-05T15:05:04.653096634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:05:04.655417 kubelet[3315]: E1105 15:05:04.655332 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:05:04.655748 kubelet[3315]: E1105 15:05:04.655512 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:05:04.657117 kubelet[3315]: E1105 15:05:04.656277 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mt6w6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dbscs_calico-system(80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:04.660372 kubelet[3315]: E1105 15:05:04.660129 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dbscs" podUID="80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572" Nov 5 15:05:04.662346 containerd[1997]: time="2025-11-05T15:05:04.661779678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:05:04.914824 containerd[1997]: time="2025-11-05T15:05:04.914744792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-rvxhd,Uid:9e79ac53-08d7-4495-96f5-177d69064854,Namespace:kube-system,Attempt:0,} returns sandbox id \"0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147\"" Nov 5 15:05:04.943493 containerd[1997]: time="2025-11-05T15:05:04.943212128Z" 
level=info msg="CreateContainer within sandbox \"0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:05:04.982052 containerd[1997]: time="2025-11-05T15:05:04.981976916Z" level=info msg="Container 33d235a638df393019f7b31ec2fc2f3c179a77ce9d3e95fdb948ced97541d739: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:05:04.990132 sshd[5242]: Connection closed by 139.178.89.65 port 54538 Nov 5 15:05:04.996426 sshd-session[5202]: pam_unix(sshd:session): session closed for user core Nov 5 15:05:05.000598 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4164711744.mount: Deactivated successfully. Nov 5 15:05:05.007223 containerd[1997]: time="2025-11-05T15:05:05.007097872Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:05.014319 containerd[1997]: time="2025-11-05T15:05:05.013938148Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:05:05.015387 containerd[1997]: time="2025-11-05T15:05:05.015330328Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:05:05.017767 systemd[1]: sshd@7-172.31.23.78:22-139.178.89.65:54538.service: Deactivated successfully. Nov 5 15:05:05.018504 kubelet[3315]: E1105 15:05:05.018361 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:05:05.023985 kubelet[3315]: E1105 15:05:05.019226 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:05:05.023985 kubelet[3315]: E1105 15:05:05.023665 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6dnpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78f86b5b57-hhskm_calico-system(88b7c103-d45c-4fa8-81a5-56483036338a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:05.026561 systemd[1]: session-8.scope: Deactivated successfully. 
Nov 5 15:05:05.030964 containerd[1997]: time="2025-11-05T15:05:05.030550192Z" level=info msg="CreateContainer within sandbox \"0b39fdfb11d50af0f4758e07080f26ee40f45cba07548d65008caff82a964147\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"33d235a638df393019f7b31ec2fc2f3c179a77ce9d3e95fdb948ced97541d739\"" Nov 5 15:05:05.031120 kubelet[3315]: E1105 15:05:05.030732 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78f86b5b57-hhskm" podUID="88b7c103-d45c-4fa8-81a5-56483036338a" Nov 5 15:05:05.031824 systemd-logind[1950]: Session 8 logged out. Waiting for processes to exit. Nov 5 15:05:05.038582 systemd-logind[1950]: Removed session 8. Nov 5 15:05:05.039578 kubelet[3315]: E1105 15:05:05.039504 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dbscs" podUID="80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572" Nov 5 15:05:05.042525 containerd[1997]: time="2025-11-05T15:05:05.042445348Z" level=info msg="StartContainer for \"33d235a638df393019f7b31ec2fc2f3c179a77ce9d3e95fdb948ced97541d739\"" Nov 5 15:05:05.051094 containerd[1997]: time="2025-11-05T15:05:05.050465728Z" level=info msg="connecting to shim 33d235a638df393019f7b31ec2fc2f3c179a77ce9d3e95fdb948ced97541d739" address="unix:///run/containerd/s/ff3013c96c572b2695440801c1730bd8ff1094d78c8cb024c4f9a407b8ab0832" protocol=ttrpc version=3 Nov 5 15:05:05.184210 systemd-networkd[1575]: calib0e730f96b9: Link UP Nov 5 15:05:05.186469 systemd-networkd[1575]: calib0e730f96b9: Gained carrier Nov 5 15:05:05.196812 containerd[1997]: time="2025-11-05T15:05:05.196727213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-84dbb9fd44-dctgw,Uid:8b23d5a1-7fb9-4412-bcea-afb711fedf9c,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f316bff7a4ac634a20b54faa05793dfc2369ab44a2ec4e549b98cc321fa59926\"" Nov 5 15:05:05.210530 containerd[1997]: time="2025-11-05T15:05:05.210484637Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:05:05.232456 systemd[1]: Started cri-containerd-33d235a638df393019f7b31ec2fc2f3c179a77ce9d3e95fdb948ced97541d739.scope - libcontainer container 33d235a638df393019f7b31ec2fc2f3c179a77ce9d3e95fdb948ced97541d739. 
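The ErrImagePull and ImagePullBackOff entries above show the usual kubelet pattern when a tag cannot be resolved: each failed CRI PullImage attempt is followed by a growing delay before the next try. The sketch below only illustrates that capped-doubling retry loop; it is not kubelet's code, pullImage is a hypothetical stand-in for the CRI call, and the 10 s / 5 min values are illustrative.

// backoffsketch.go — illustrative only: the kind of capped exponential
// back-off that produces the ErrImagePull -> ImagePullBackOff pattern.
package main

import (
	"errors"
	"fmt"
	"time"
)

var errNotFound = errors.New("manifest not found (404)")

// pullImage stands in for the CRI PullImage call; a tag that does not
// exist in the registry keeps failing, as ghcr.io does in the log.
func pullImage(ref string) error { return errNotFound }

func main() {
	const ref = "ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
	delay, maxDelay := 10*time.Second, 5*time.Minute

	for attempt := 1; attempt <= 5; attempt++ {
		if err := pullImage(ref); err != nil {
			fmt.Printf("attempt %d: ErrImagePull: %v; backing off %s\n",
				attempt, err, delay)
			// A real kubelet keeps the pod in ImagePullBackOff while the
			// timer runs; the sleep here is shortened for the example.
			time.Sleep(10 * time.Millisecond)
			delay *= 2
			if delay > maxDelay {
				delay = maxDelay
			}
			continue
		}
		fmt.Println("image pulled")
		return
	}
	fmt.Println("still failing; retries continue until the tag exists:", ref)
}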
Nov 5 15:05:05.240067 containerd[1997]: 2025-11-05 15:05:04.781 [INFO][5254] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--78-k8s-goldmane--666569f655--q7wlk-eth0 goldmane-666569f655- calico-system ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7 921 0 2025-11-05 15:04:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-23-78 goldmane-666569f655-q7wlk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calib0e730f96b9 [] [] }} ContainerID="6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d" Namespace="calico-system" Pod="goldmane-666569f655-q7wlk" WorkloadEndpoint="ip--172--31--23--78-k8s-goldmane--666569f655--q7wlk-" Nov 5 15:05:05.240067 containerd[1997]: 2025-11-05 15:05:04.782 [INFO][5254] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d" Namespace="calico-system" Pod="goldmane-666569f655-q7wlk" WorkloadEndpoint="ip--172--31--23--78-k8s-goldmane--666569f655--q7wlk-eth0" Nov 5 15:05:05.240067 containerd[1997]: 2025-11-05 15:05:04.867 [INFO][5314] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d" HandleID="k8s-pod-network.6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d" Workload="ip--172--31--23--78-k8s-goldmane--666569f655--q7wlk-eth0" Nov 5 15:05:05.240067 containerd[1997]: 2025-11-05 15:05:04.869 [INFO][5314] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d" HandleID="k8s-pod-network.6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d" Workload="ip--172--31--23--78-k8s-goldmane--666569f655--q7wlk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003aa120), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-78", "pod":"goldmane-666569f655-q7wlk", "timestamp":"2025-11-05 15:05:04.867706123 +0000 UTC"}, Hostname:"ip-172-31-23-78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:05:05.240067 containerd[1997]: 2025-11-05 15:05:04.869 [INFO][5314] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:05:05.240067 containerd[1997]: 2025-11-05 15:05:04.870 [INFO][5314] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Nov 5 15:05:05.240067 containerd[1997]: 2025-11-05 15:05:04.873 [INFO][5314] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-78' Nov 5 15:05:05.240067 containerd[1997]: 2025-11-05 15:05:04.925 [INFO][5314] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d" host="ip-172-31-23-78" Nov 5 15:05:05.240067 containerd[1997]: 2025-11-05 15:05:04.949 [INFO][5314] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-78" Nov 5 15:05:05.240067 containerd[1997]: 2025-11-05 15:05:04.966 [INFO][5314] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:05.240067 containerd[1997]: 2025-11-05 15:05:04.976 [INFO][5314] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:05.240067 containerd[1997]: 2025-11-05 15:05:05.004 [INFO][5314] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:05.240067 containerd[1997]: 2025-11-05 15:05:05.004 [INFO][5314] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d" host="ip-172-31-23-78" Nov 5 15:05:05.240067 containerd[1997]: 2025-11-05 15:05:05.014 [INFO][5314] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d Nov 5 15:05:05.240067 containerd[1997]: 2025-11-05 15:05:05.055 [INFO][5314] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d" host="ip-172-31-23-78" Nov 5 15:05:05.240067 containerd[1997]: 2025-11-05 15:05:05.097 [INFO][5314] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.136/26] block=192.168.19.128/26 handle="k8s-pod-network.6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d" host="ip-172-31-23-78" Nov 5 15:05:05.240067 containerd[1997]: 2025-11-05 15:05:05.100 [INFO][5314] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.136/26] handle="k8s-pod-network.6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d" host="ip-172-31-23-78" Nov 5 15:05:05.240067 containerd[1997]: 2025-11-05 15:05:05.104 [INFO][5314] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
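The IPAM sequence above is Calico's block-affinity path: node ip-172-31-23-78 already holds an affinity for the 192.168.19.128/26 block, so the plugin loads that block and claims a free address from it (192.168.19.136 for the goldmane pod). The block arithmetic can be sanity-checked with Python's ipaddress module:

    # Confirm that the claimed address falls inside the block the node has affinity for.
    import ipaddress

    block = ipaddress.ip_network("192.168.19.128/26")   # block with node affinity
    claimed = ipaddress.ip_address("192.168.19.136")    # address handed to goldmane-666569f655-q7wlk

    print(block.num_addresses)                             # 64 addresses in a /26
    print(claimed in block)                                # True
    print(block.network_address, block.broadcast_address)  # 192.168.19.128 192.168.19.191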
Nov 5 15:05:05.240067 containerd[1997]: 2025-11-05 15:05:05.109 [INFO][5314] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.136/26] IPv6=[] ContainerID="6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d" HandleID="k8s-pod-network.6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d" Workload="ip--172--31--23--78-k8s-goldmane--666569f655--q7wlk-eth0" Nov 5 15:05:05.243361 containerd[1997]: 2025-11-05 15:05:05.141 [INFO][5254] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d" Namespace="calico-system" Pod="goldmane-666569f655-q7wlk" WorkloadEndpoint="ip--172--31--23--78-k8s-goldmane--666569f655--q7wlk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--78-k8s-goldmane--666569f655--q7wlk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 4, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-78", ContainerID:"", Pod:"goldmane-666569f655-q7wlk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.19.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib0e730f96b9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:05:05.243361 containerd[1997]: 2025-11-05 15:05:05.144 [INFO][5254] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.136/32] ContainerID="6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d" Namespace="calico-system" Pod="goldmane-666569f655-q7wlk" WorkloadEndpoint="ip--172--31--23--78-k8s-goldmane--666569f655--q7wlk-eth0" Nov 5 15:05:05.243361 containerd[1997]: 2025-11-05 15:05:05.144 [INFO][5254] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib0e730f96b9 ContainerID="6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d" Namespace="calico-system" Pod="goldmane-666569f655-q7wlk" WorkloadEndpoint="ip--172--31--23--78-k8s-goldmane--666569f655--q7wlk-eth0" Nov 5 15:05:05.243361 containerd[1997]: 2025-11-05 15:05:05.190 [INFO][5254] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d" Namespace="calico-system" Pod="goldmane-666569f655-q7wlk" WorkloadEndpoint="ip--172--31--23--78-k8s-goldmane--666569f655--q7wlk-eth0" Nov 5 15:05:05.243361 containerd[1997]: 2025-11-05 15:05:05.191 [INFO][5254] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d" Namespace="calico-system" Pod="goldmane-666569f655-q7wlk" 
WorkloadEndpoint="ip--172--31--23--78-k8s-goldmane--666569f655--q7wlk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--78-k8s-goldmane--666569f655--q7wlk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7", ResourceVersion:"921", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 4, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-78", ContainerID:"6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d", Pod:"goldmane-666569f655-q7wlk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.19.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calib0e730f96b9", MAC:"1e:56:89:a4:f5:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:05:05.243361 containerd[1997]: 2025-11-05 15:05:05.230 [INFO][5254] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d" Namespace="calico-system" Pod="goldmane-666569f655-q7wlk" WorkloadEndpoint="ip--172--31--23--78-k8s-goldmane--666569f655--q7wlk-eth0" Nov 5 15:05:05.319724 containerd[1997]: time="2025-11-05T15:05:05.319638954Z" level=info msg="connecting to shim 6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d" address="unix:///run/containerd/s/f41c2fe9e7f35d8e7d9dcfc15f9fc5eaacf8adaae8f2f7a4981fc74ad6f4314d" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:05:05.367208 containerd[1997]: time="2025-11-05T15:05:05.367146930Z" level=info msg="StartContainer for \"33d235a638df393019f7b31ec2fc2f3c179a77ce9d3e95fdb948ced97541d739\" returns successfully" Nov 5 15:05:05.444190 systemd[1]: Started cri-containerd-6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d.scope - libcontainer container 6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d. 
Nov 5 15:05:05.451198 systemd-networkd[1575]: cali3a51143cf7e: Gained IPv6LL Nov 5 15:05:05.503597 containerd[1997]: time="2025-11-05T15:05:05.503547954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nt62s,Uid:d480047e-a8f5-4d50-b3b1-cda61de6f2e4,Namespace:kube-system,Attempt:0,}" Nov 5 15:05:05.526902 containerd[1997]: time="2025-11-05T15:05:05.525973303Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:05.529290 containerd[1997]: time="2025-11-05T15:05:05.528437143Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:05:05.529688 containerd[1997]: time="2025-11-05T15:05:05.529093855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:05:05.530965 kubelet[3315]: E1105 15:05:05.530168 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:05:05.530965 kubelet[3315]: E1105 15:05:05.530238 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:05:05.532226 kubelet[3315]: E1105 15:05:05.532125 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-49sgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84dbb9fd44-dctgw_calico-apiserver(8b23d5a1-7fb9-4412-bcea-afb711fedf9c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:05.533654 kubelet[3315]: E1105 15:05:05.533565 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84dbb9fd44-dctgw" podUID="8b23d5a1-7fb9-4412-bcea-afb711fedf9c" Nov 5 15:05:05.643697 systemd-networkd[1575]: cali20c6b1eb5b7: Gained IPv6LL Nov 5 15:05:05.653571 containerd[1997]: time="2025-11-05T15:05:05.653128231Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-q7wlk,Uid:ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7,Namespace:calico-system,Attempt:0,} returns sandbox id \"6eaa4715bf3914d0c050e50100d0cef418ddd2ce7590416cbf332984563f998d\"" Nov 5 15:05:05.662960 containerd[1997]: time="2025-11-05T15:05:05.662438767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:05:05.863306 systemd-networkd[1575]: cali98d1b6da41d: Link UP Nov 5 15:05:05.870129 systemd-networkd[1575]: cali98d1b6da41d: Gained carrier Nov 5 15:05:05.900565 containerd[1997]: 2025-11-05 15:05:05.653 [INFO][5430] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--78-k8s-coredns--674b8bbfcf--nt62s-eth0 coredns-674b8bbfcf- kube-system d480047e-a8f5-4d50-b3b1-cda61de6f2e4 915 0 2025-11-05 15:04:03 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-78 coredns-674b8bbfcf-nt62s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali98d1b6da41d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161" Namespace="kube-system" Pod="coredns-674b8bbfcf-nt62s" WorkloadEndpoint="ip--172--31--23--78-k8s-coredns--674b8bbfcf--nt62s-" Nov 5 15:05:05.900565 containerd[1997]: 2025-11-05 15:05:05.654 [INFO][5430] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161" Namespace="kube-system" Pod="coredns-674b8bbfcf-nt62s" WorkloadEndpoint="ip--172--31--23--78-k8s-coredns--674b8bbfcf--nt62s-eth0" Nov 5 15:05:05.900565 containerd[1997]: 2025-11-05 15:05:05.788 [INFO][5448] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161" HandleID="k8s-pod-network.bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161" Workload="ip--172--31--23--78-k8s-coredns--674b8bbfcf--nt62s-eth0" Nov 5 15:05:05.900565 containerd[1997]: 2025-11-05 15:05:05.789 [INFO][5448] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161" HandleID="k8s-pod-network.bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161" Workload="ip--172--31--23--78-k8s-coredns--674b8bbfcf--nt62s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400037c830), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-78", "pod":"coredns-674b8bbfcf-nt62s", "timestamp":"2025-11-05 15:05:05.788702012 +0000 UTC"}, Hostname:"ip-172-31-23-78", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 5 15:05:05.900565 containerd[1997]: 2025-11-05 15:05:05.789 [INFO][5448] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Nov 5 15:05:05.900565 containerd[1997]: 2025-11-05 15:05:05.789 [INFO][5448] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Nov 5 15:05:05.900565 containerd[1997]: 2025-11-05 15:05:05.789 [INFO][5448] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-78' Nov 5 15:05:05.900565 containerd[1997]: 2025-11-05 15:05:05.803 [INFO][5448] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161" host="ip-172-31-23-78" Nov 5 15:05:05.900565 containerd[1997]: 2025-11-05 15:05:05.812 [INFO][5448] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-78" Nov 5 15:05:05.900565 containerd[1997]: 2025-11-05 15:05:05.819 [INFO][5448] ipam/ipam.go 511: Trying affinity for 192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:05.900565 containerd[1997]: 2025-11-05 15:05:05.823 [INFO][5448] ipam/ipam.go 158: Attempting to load block cidr=192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:05.900565 containerd[1997]: 2025-11-05 15:05:05.827 [INFO][5448] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.19.128/26 host="ip-172-31-23-78" Nov 5 15:05:05.900565 containerd[1997]: 2025-11-05 15:05:05.828 [INFO][5448] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.19.128/26 handle="k8s-pod-network.bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161" host="ip-172-31-23-78" Nov 5 15:05:05.900565 containerd[1997]: 2025-11-05 15:05:05.830 [INFO][5448] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161 Nov 5 15:05:05.900565 containerd[1997]: 2025-11-05 15:05:05.837 [INFO][5448] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.19.128/26 handle="k8s-pod-network.bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161" host="ip-172-31-23-78" Nov 5 15:05:05.900565 containerd[1997]: 
2025-11-05 15:05:05.852 [INFO][5448] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.19.137/26] block=192.168.19.128/26 handle="k8s-pod-network.bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161" host="ip-172-31-23-78" Nov 5 15:05:05.900565 containerd[1997]: 2025-11-05 15:05:05.852 [INFO][5448] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.19.137/26] handle="k8s-pod-network.bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161" host="ip-172-31-23-78" Nov 5 15:05:05.900565 containerd[1997]: 2025-11-05 15:05:05.853 [INFO][5448] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Nov 5 15:05:05.900565 containerd[1997]: 2025-11-05 15:05:05.853 [INFO][5448] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.19.137/26] IPv6=[] ContainerID="bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161" HandleID="k8s-pod-network.bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161" Workload="ip--172--31--23--78-k8s-coredns--674b8bbfcf--nt62s-eth0" Nov 5 15:05:05.903832 containerd[1997]: 2025-11-05 15:05:05.857 [INFO][5430] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161" Namespace="kube-system" Pod="coredns-674b8bbfcf-nt62s" WorkloadEndpoint="ip--172--31--23--78-k8s-coredns--674b8bbfcf--nt62s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--78-k8s-coredns--674b8bbfcf--nt62s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d480047e-a8f5-4d50-b3b1-cda61de6f2e4", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 4, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-78", ContainerID:"", Pod:"coredns-674b8bbfcf-nt62s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali98d1b6da41d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:05:05.903832 containerd[1997]: 2025-11-05 15:05:05.857 [INFO][5430] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.19.137/32] ContainerID="bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161" Namespace="kube-system" Pod="coredns-674b8bbfcf-nt62s" WorkloadEndpoint="ip--172--31--23--78-k8s-coredns--674b8bbfcf--nt62s-eth0" Nov 5 15:05:05.903832 containerd[1997]: 2025-11-05 
15:05:05.857 [INFO][5430] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali98d1b6da41d ContainerID="bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161" Namespace="kube-system" Pod="coredns-674b8bbfcf-nt62s" WorkloadEndpoint="ip--172--31--23--78-k8s-coredns--674b8bbfcf--nt62s-eth0" Nov 5 15:05:05.903832 containerd[1997]: 2025-11-05 15:05:05.867 [INFO][5430] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161" Namespace="kube-system" Pod="coredns-674b8bbfcf-nt62s" WorkloadEndpoint="ip--172--31--23--78-k8s-coredns--674b8bbfcf--nt62s-eth0" Nov 5 15:05:05.903832 containerd[1997]: 2025-11-05 15:05:05.872 [INFO][5430] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161" Namespace="kube-system" Pod="coredns-674b8bbfcf-nt62s" WorkloadEndpoint="ip--172--31--23--78-k8s-coredns--674b8bbfcf--nt62s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--78-k8s-coredns--674b8bbfcf--nt62s-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"d480047e-a8f5-4d50-b3b1-cda61de6f2e4", ResourceVersion:"915", Generation:0, CreationTimestamp:time.Date(2025, time.November, 5, 15, 4, 3, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-78", ContainerID:"bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161", Pod:"coredns-674b8bbfcf-nt62s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.19.137/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali98d1b6da41d", MAC:"22:ec:15:c1:af:76", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Nov 5 15:05:05.903832 containerd[1997]: 2025-11-05 15:05:05.894 [INFO][5430] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161" Namespace="kube-system" Pod="coredns-674b8bbfcf-nt62s" WorkloadEndpoint="ip--172--31--23--78-k8s-coredns--674b8bbfcf--nt62s-eth0" Nov 5 15:05:05.955913 containerd[1997]: time="2025-11-05T15:05:05.955488669Z" level=info msg="connecting to shim bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161" address="unix:///run/containerd/s/453d8f2c59b8ad28f32bfb91ebc46860ccfe390094f36ad6a86dea0ebd262a88" namespace=k8s.io protocol=ttrpc 
version=3 Nov 5 15:05:05.986601 containerd[1997]: time="2025-11-05T15:05:05.986499069Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:05.989676 containerd[1997]: time="2025-11-05T15:05:05.989334213Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:05:05.989676 containerd[1997]: time="2025-11-05T15:05:05.989544237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:05:05.990283 kubelet[3315]: E1105 15:05:05.990135 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:05:05.990283 kubelet[3315]: E1105 15:05:05.990201 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:05:05.993463 kubelet[3315]: E1105 15:05:05.992856 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qnp2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-q7wlk_calico-system(ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:05.994998 kubelet[3315]: E1105 15:05:05.994778 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q7wlk" podUID="ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7" Nov 5 15:05:06.021200 systemd[1]: Started cri-containerd-bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161.scope - libcontainer container bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161. 
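The Back-off entries that follow are kubelet throttling its retries rather than a new failure: after each ErrImagePull the next pull attempt is delayed, and the delay grows roughly exponentially up to a cap (the 10-second start and 5-minute ceiling below are assumed upstream defaults, not values visible in this log). A small sketch of that schedule:

    # Illustrative image-pull backoff schedule; initial delay and cap are assumed defaults.
    def backoff_delays(initial: float = 10.0, cap: float = 300.0, attempts: int = 8):
        delay = initial
        for _ in range(attempts):
            yield delay
            delay = min(delay * 2, cap)

    print(list(backoff_delays()))   # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]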
Nov 5 15:05:06.040992 kubelet[3315]: E1105 15:05:06.040338 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q7wlk" podUID="ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7" Nov 5 15:05:06.054254 kubelet[3315]: E1105 15:05:06.054189 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84dbb9fd44-dctgw" podUID="8b23d5a1-7fb9-4412-bcea-afb711fedf9c" Nov 5 15:05:06.061556 kubelet[3315]: E1105 15:05:06.061441 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78f86b5b57-hhskm" podUID="88b7c103-d45c-4fa8-81a5-56483036338a" Nov 5 15:05:06.161395 kubelet[3315]: I1105 15:05:06.161208 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-rvxhd" podStartSLOduration=63.16115955 podStartE2EDuration="1m3.16115955s" podCreationTimestamp="2025-11-05 15:04:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:05:06.158554458 +0000 UTC m=+66.971074378" watchObservedRunningTime="2025-11-05 15:05:06.16115955 +0000 UTC m=+66.973679470" Nov 5 15:05:06.195206 containerd[1997]: time="2025-11-05T15:05:06.195091614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-nt62s,Uid:d480047e-a8f5-4d50-b3b1-cda61de6f2e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161\"" Nov 5 15:05:06.214244 containerd[1997]: time="2025-11-05T15:05:06.214118802Z" level=info msg="CreateContainer within sandbox \"bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 15:05:06.238590 containerd[1997]: time="2025-11-05T15:05:06.238452606Z" level=info msg="Container 15c22dc54fb4358358db173a180c04cca90ec858d3d21b6abbe90de818b582de: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:05:06.267261 containerd[1997]: time="2025-11-05T15:05:06.266976546Z" level=info msg="CreateContainer within sandbox \"bc8c60a40e05e11bc30b1f0e0669bba761ddc3bd016c8f08b38a8c22c659a161\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id 
\"15c22dc54fb4358358db173a180c04cca90ec858d3d21b6abbe90de818b582de\"" Nov 5 15:05:06.268750 containerd[1997]: time="2025-11-05T15:05:06.268702842Z" level=info msg="StartContainer for \"15c22dc54fb4358358db173a180c04cca90ec858d3d21b6abbe90de818b582de\"" Nov 5 15:05:06.274100 containerd[1997]: time="2025-11-05T15:05:06.274030518Z" level=info msg="connecting to shim 15c22dc54fb4358358db173a180c04cca90ec858d3d21b6abbe90de818b582de" address="unix:///run/containerd/s/453d8f2c59b8ad28f32bfb91ebc46860ccfe390094f36ad6a86dea0ebd262a88" protocol=ttrpc version=3 Nov 5 15:05:06.334280 systemd[1]: Started cri-containerd-15c22dc54fb4358358db173a180c04cca90ec858d3d21b6abbe90de818b582de.scope - libcontainer container 15c22dc54fb4358358db173a180c04cca90ec858d3d21b6abbe90de818b582de. Nov 5 15:05:06.346388 systemd-networkd[1575]: calib0e730f96b9: Gained IPv6LL Nov 5 15:05:06.402072 containerd[1997]: time="2025-11-05T15:05:06.401866735Z" level=info msg="StartContainer for \"15c22dc54fb4358358db173a180c04cca90ec858d3d21b6abbe90de818b582de\" returns successfully" Nov 5 15:05:07.068236 kubelet[3315]: E1105 15:05:07.068165 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q7wlk" podUID="ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7" Nov 5 15:05:07.071543 kubelet[3315]: E1105 15:05:07.069859 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84dbb9fd44-dctgw" podUID="8b23d5a1-7fb9-4412-bcea-afb711fedf9c" Nov 5 15:05:07.145023 kubelet[3315]: I1105 15:05:07.144279 3315 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-nt62s" podStartSLOduration=64.144255019 podStartE2EDuration="1m4.144255019s" podCreationTimestamp="2025-11-05 15:04:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:05:07.117621954 +0000 UTC m=+67.930141898" watchObservedRunningTime="2025-11-05 15:05:07.144255019 +0000 UTC m=+67.956774927" Nov 5 15:05:07.370165 systemd-networkd[1575]: cali98d1b6da41d: Gained IPv6LL Nov 5 15:05:10.021293 ntpd[1941]: Listen normally on 6 vxlan.calico 192.168.19.128:123 Nov 5 15:05:10.022403 ntpd[1941]: 5 Nov 15:05:10 ntpd[1941]: Listen normally on 6 vxlan.calico 192.168.19.128:123 Nov 5 15:05:10.022403 ntpd[1941]: 5 Nov 15:05:10 ntpd[1941]: Listen normally on 7 cali14029c146ee [fe80::ecee:eeff:feee:eeee%4]:123 Nov 5 15:05:10.022403 ntpd[1941]: 5 Nov 15:05:10 ntpd[1941]: Listen normally on 8 vxlan.calico [fe80::64a5:c4ff:fe13:e428%5]:123 Nov 5 15:05:10.022403 ntpd[1941]: 5 Nov 15:05:10 ntpd[1941]: Listen normally on 9 calie66e8e1cc76 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 5 
15:05:10.022403 ntpd[1941]: 5 Nov 15:05:10 ntpd[1941]: Listen normally on 10 cali0d114797c33 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 5 15:05:10.022403 ntpd[1941]: 5 Nov 15:05:10 ntpd[1941]: Listen normally on 11 cali4aefc4f801a [fe80::ecee:eeff:feee:eeee%10]:123 Nov 5 15:05:10.022403 ntpd[1941]: 5 Nov 15:05:10 ntpd[1941]: Listen normally on 12 cali093e98d8bd0 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 5 15:05:10.022403 ntpd[1941]: 5 Nov 15:05:10 ntpd[1941]: Listen normally on 13 cali20c6b1eb5b7 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 5 15:05:10.022403 ntpd[1941]: 5 Nov 15:05:10 ntpd[1941]: Listen normally on 14 cali3a51143cf7e [fe80::ecee:eeff:feee:eeee%13]:123 Nov 5 15:05:10.022403 ntpd[1941]: 5 Nov 15:05:10 ntpd[1941]: Listen normally on 15 calib0e730f96b9 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 5 15:05:10.022403 ntpd[1941]: 5 Nov 15:05:10 ntpd[1941]: Listen normally on 16 cali98d1b6da41d [fe80::ecee:eeff:feee:eeee%15]:123 Nov 5 15:05:10.021381 ntpd[1941]: Listen normally on 7 cali14029c146ee [fe80::ecee:eeff:feee:eeee%4]:123 Nov 5 15:05:10.021429 ntpd[1941]: Listen normally on 8 vxlan.calico [fe80::64a5:c4ff:fe13:e428%5]:123 Nov 5 15:05:10.021474 ntpd[1941]: Listen normally on 9 calie66e8e1cc76 [fe80::ecee:eeff:feee:eeee%8]:123 Nov 5 15:05:10.021523 ntpd[1941]: Listen normally on 10 cali0d114797c33 [fe80::ecee:eeff:feee:eeee%9]:123 Nov 5 15:05:10.021567 ntpd[1941]: Listen normally on 11 cali4aefc4f801a [fe80::ecee:eeff:feee:eeee%10]:123 Nov 5 15:05:10.021610 ntpd[1941]: Listen normally on 12 cali093e98d8bd0 [fe80::ecee:eeff:feee:eeee%11]:123 Nov 5 15:05:10.021664 ntpd[1941]: Listen normally on 13 cali20c6b1eb5b7 [fe80::ecee:eeff:feee:eeee%12]:123 Nov 5 15:05:10.021714 ntpd[1941]: Listen normally on 14 cali3a51143cf7e [fe80::ecee:eeff:feee:eeee%13]:123 Nov 5 15:05:10.021757 ntpd[1941]: Listen normally on 15 calib0e730f96b9 [fe80::ecee:eeff:feee:eeee%14]:123 Nov 5 15:05:10.021804 ntpd[1941]: Listen normally on 16 cali98d1b6da41d [fe80::ecee:eeff:feee:eeee%15]:123 Nov 5 15:05:10.032414 systemd[1]: Started sshd@8-172.31.23.78:22-139.178.89.65:51468.service - OpenSSH per-connection server daemon (139.178.89.65:51468). Nov 5 15:05:10.238216 sshd[5562]: Accepted publickey for core from 139.178.89.65 port 51468 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:05:10.242630 sshd-session[5562]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:05:10.256365 systemd-logind[1950]: New session 9 of user core. Nov 5 15:05:10.265533 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 5 15:05:10.533475 sshd[5565]: Connection closed by 139.178.89.65 port 51468 Nov 5 15:05:10.534184 sshd-session[5562]: pam_unix(sshd:session): session closed for user core Nov 5 15:05:10.542767 systemd[1]: sshd@8-172.31.23.78:22-139.178.89.65:51468.service: Deactivated successfully. Nov 5 15:05:10.548057 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 15:05:10.551425 systemd-logind[1950]: Session 9 logged out. Waiting for processes to exit. Nov 5 15:05:10.555731 systemd-logind[1950]: Removed session 9. Nov 5 15:05:15.573773 systemd[1]: Started sshd@9-172.31.23.78:22-139.178.89.65:51480.service - OpenSSH per-connection server daemon (139.178.89.65:51480). 
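The ntpd entries above show every cali* interface carrying the identical link-local address fe80::ecee:eeff:feee:eeee (only vxlan.calico differs). That is simply the EUI-64 expansion of the fixed MAC ee:ee:ee:ee:ee:ee that Calico typically assigns to the host side of each workload veth, which the following check reproduces:

    # Derive the EUI-64 link-local address from Calico's fixed host-side veth MAC.
    import ipaddress

    def eui64_link_local(mac: str) -> ipaddress.IPv6Address:
        octets = [int(b, 16) for b in mac.split(":")]
        octets[0] ^= 0x02                                  # flip the universal/local bit
        iid = octets[:3] + [0xFF, 0xFE] + octets[3:]       # insert ff:fe between the halves
        return ipaddress.IPv6Address(bytes([0xFE, 0x80] + [0] * 6 + iid))

    print(eui64_link_local("ee:ee:ee:ee:ee:ee"))   # fe80::ecee:eeff:feee:eeee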
Nov 5 15:05:15.785419 sshd[5585]: Accepted publickey for core from 139.178.89.65 port 51480 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:05:15.788637 sshd-session[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:05:15.798973 systemd-logind[1950]: New session 10 of user core. Nov 5 15:05:15.809182 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 15:05:16.076395 sshd[5588]: Connection closed by 139.178.89.65 port 51480 Nov 5 15:05:16.077316 sshd-session[5585]: pam_unix(sshd:session): session closed for user core Nov 5 15:05:16.085420 systemd[1]: sshd@9-172.31.23.78:22-139.178.89.65:51480.service: Deactivated successfully. Nov 5 15:05:16.091200 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 15:05:16.096606 systemd-logind[1950]: Session 10 logged out. Waiting for processes to exit. Nov 5 15:05:16.099498 systemd-logind[1950]: Removed session 10. Nov 5 15:05:16.489299 containerd[1997]: time="2025-11-05T15:05:16.489229949Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:05:16.883363 containerd[1997]: time="2025-11-05T15:05:16.883197187Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:16.886318 containerd[1997]: time="2025-11-05T15:05:16.886240171Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:05:16.886457 containerd[1997]: time="2025-11-05T15:05:16.886360303Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:05:16.886929 kubelet[3315]: E1105 15:05:16.886732 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:05:16.886929 kubelet[3315]: E1105 15:05:16.886796 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:05:16.888087 kubelet[3315]: E1105 15:05:16.887168 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cr7wf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9456ddf4d-qdk95_calico-apiserver(bd295bb4-9ab3-4f09-8d18-d7e16c0d217c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:16.888280 containerd[1997]: time="2025-11-05T15:05:16.887641303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:05:16.890336 kubelet[3315]: E1105 15:05:16.889318 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9456ddf4d-qdk95" podUID="bd295bb4-9ab3-4f09-8d18-d7e16c0d217c" Nov 5 15:05:17.174658 containerd[1997]: time="2025-11-05T15:05:17.174577900Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:17.177368 containerd[1997]: time="2025-11-05T15:05:17.177300304Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:05:17.177464 containerd[1997]: time="2025-11-05T15:05:17.177412528Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:05:17.177675 kubelet[3315]: E1105 15:05:17.177617 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:05:17.177757 kubelet[3315]: E1105 15:05:17.177689 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:05:17.177948 kubelet[3315]: E1105 15:05:17.177846 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ed7ddee92fa141e6860da0e5d6f43cfe,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c8rkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79d458847d-vcdwj_calico-system(7c3e0183-e5b9-4364-be32-8caba037f1e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:17.182911 containerd[1997]: time="2025-11-05T15:05:17.182708956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:05:17.490790 containerd[1997]: time="2025-11-05T15:05:17.490406730Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:17.496626 containerd[1997]: time="2025-11-05T15:05:17.496427250Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:05:17.499000 containerd[1997]: time="2025-11-05T15:05:17.498590622Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:05:17.500215 kubelet[3315]: E1105 15:05:17.500165 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:05:17.500773 kubelet[3315]: E1105 15:05:17.500718 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:05:17.501626 containerd[1997]: time="2025-11-05T15:05:17.501563634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:05:17.502206 kubelet[3315]: E1105 15:05:17.502074 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8rkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79d458847d-vcdwj_calico-system(7c3e0183-e5b9-4364-be32-8caba037f1e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found" logger="UnhandledError" Nov 5 15:05:17.507232 kubelet[3315]: E1105 15:05:17.505824 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79d458847d-vcdwj" podUID="7c3e0183-e5b9-4364-be32-8caba037f1e7" Nov 5 15:05:17.797137 containerd[1997]: time="2025-11-05T15:05:17.796965524Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:17.800142 containerd[1997]: time="2025-11-05T15:05:17.800088980Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:05:17.800477 containerd[1997]: time="2025-11-05T15:05:17.800387420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:05:17.800993 kubelet[3315]: E1105 15:05:17.800934 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:05:17.801098 kubelet[3315]: E1105 15:05:17.801025 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:05:17.801571 containerd[1997]: time="2025-11-05T15:05:17.801527576Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:05:17.802192 kubelet[3315]: E1105 15:05:17.802014 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7czzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9456ddf4d-hxsgd_calico-apiserver(8a16765c-7214-405b-a3ab-1a750d3fae14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:17.803347 kubelet[3315]: E1105 15:05:17.803272 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9456ddf4d-hxsgd" podUID="8a16765c-7214-405b-a3ab-1a750d3fae14" Nov 5 15:05:18.094936 containerd[1997]: time="2025-11-05T15:05:18.094719989Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:18.097649 containerd[1997]: time="2025-11-05T15:05:18.097578257Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:05:18.097784 containerd[1997]: time="2025-11-05T15:05:18.097711349Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:05:18.098018 kubelet[3315]: E1105 15:05:18.097943 3315 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:05:18.099029 kubelet[3315]: E1105 15:05:18.098013 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:05:18.099029 kubelet[3315]: E1105 15:05:18.098277 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mt6w6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dbscs_calico-system(80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:18.099710 containerd[1997]: time="2025-11-05T15:05:18.099633461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:05:18.369531 containerd[1997]: time="2025-11-05T15:05:18.369363774Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:18.371765 containerd[1997]: time="2025-11-05T15:05:18.371691762Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:05:18.371912 containerd[1997]: time="2025-11-05T15:05:18.371826918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:05:18.372172 kubelet[3315]: E1105 15:05:18.372038 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:05:18.372172 kubelet[3315]: E1105 15:05:18.372095 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:05:18.373054 containerd[1997]: time="2025-11-05T15:05:18.372507354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:05:18.374000 kubelet[3315]: E1105 15:05:18.373550 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qnp2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-q7wlk_calico-system(ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:18.376045 kubelet[3315]: E1105 15:05:18.375619 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q7wlk" podUID="ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7" Nov 5 15:05:18.648386 containerd[1997]: time="2025-11-05T15:05:18.648235256Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:18.651690 containerd[1997]: time="2025-11-05T15:05:18.651603740Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:05:18.652098 containerd[1997]: time="2025-11-05T15:05:18.651745388Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:05:18.652448 kubelet[3315]: E1105 15:05:18.651979 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:05:18.652448 kubelet[3315]: E1105 15:05:18.652041 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:05:18.652448 kubelet[3315]: E1105 15:05:18.652345 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6dnpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78f86b5b57-hhskm_calico-system(88b7c103-d45c-4fa8-81a5-56483036338a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:18.654479 kubelet[3315]: E1105 15:05:18.654125 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78f86b5b57-hhskm" podUID="88b7c103-d45c-4fa8-81a5-56483036338a" Nov 5 15:05:18.654663 containerd[1997]: time="2025-11-05T15:05:18.654168704Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:05:18.948434 containerd[1997]: time="2025-11-05T15:05:18.948241029Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:18.950557 containerd[1997]: time="2025-11-05T15:05:18.950425485Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:05:18.950557 containerd[1997]: time="2025-11-05T15:05:18.950511201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:05:18.950778 kubelet[3315]: E1105 15:05:18.950699 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:05:18.951024 kubelet[3315]: E1105 15:05:18.950767 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:05:18.951124 kubelet[3315]: E1105 15:05:18.950980 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mt6w6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dbscs_calico-system(80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:18.952758 kubelet[3315]: E1105 15:05:18.952680 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dbscs" podUID="80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572" Nov 5 15:05:20.489027 containerd[1997]: time="2025-11-05T15:05:20.488964861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:05:20.829502 containerd[1997]: time="2025-11-05T15:05:20.829347275Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:20.832226 containerd[1997]: time="2025-11-05T15:05:20.832080527Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:05:20.832226 containerd[1997]: time="2025-11-05T15:05:20.832175903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:05:20.832499 kubelet[3315]: E1105 15:05:20.832364 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:05:20.832499 kubelet[3315]: E1105 15:05:20.832423 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:05:20.834209 kubelet[3315]: E1105 15:05:20.832611 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-49sgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84dbb9fd44-dctgw_calico-apiserver(8b23d5a1-7fb9-4412-bcea-afb711fedf9c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:20.834462 kubelet[3315]: E1105 15:05:20.834264 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84dbb9fd44-dctgw" podUID="8b23d5a1-7fb9-4412-bcea-afb711fedf9c" Nov 5 15:05:21.118238 systemd[1]: Started sshd@10-172.31.23.78:22-139.178.89.65:40336.service - OpenSSH per-connection server daemon (139.178.89.65:40336). Nov 5 15:05:21.315411 sshd[5604]: Accepted publickey for core from 139.178.89.65 port 40336 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:05:21.317753 sshd-session[5604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:05:21.326254 systemd-logind[1950]: New session 11 of user core. Nov 5 15:05:21.341179 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 15:05:21.609051 sshd[5607]: Connection closed by 139.178.89.65 port 40336 Nov 5 15:05:21.610118 sshd-session[5604]: pam_unix(sshd:session): session closed for user core Nov 5 15:05:21.620082 systemd[1]: sshd@10-172.31.23.78:22-139.178.89.65:40336.service: Deactivated successfully. Nov 5 15:05:21.627440 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 15:05:21.633349 systemd-logind[1950]: Session 11 logged out. Waiting for processes to exit. Nov 5 15:05:21.656971 systemd[1]: Started sshd@11-172.31.23.78:22-139.178.89.65:40348.service - OpenSSH per-connection server daemon (139.178.89.65:40348). Nov 5 15:05:21.659340 systemd-logind[1950]: Removed session 11. Nov 5 15:05:21.863552 sshd[5619]: Accepted publickey for core from 139.178.89.65 port 40348 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:05:21.866207 sshd-session[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:05:21.876149 systemd-logind[1950]: New session 12 of user core. Nov 5 15:05:21.892190 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 15:05:22.272338 sshd[5622]: Connection closed by 139.178.89.65 port 40348 Nov 5 15:05:22.273221 sshd-session[5619]: pam_unix(sshd:session): session closed for user core Nov 5 15:05:22.282732 systemd-logind[1950]: Session 12 logged out. Waiting for processes to exit. Nov 5 15:05:22.283970 systemd[1]: sshd@11-172.31.23.78:22-139.178.89.65:40348.service: Deactivated successfully. Nov 5 15:05:22.292422 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 15:05:22.320561 systemd-logind[1950]: Removed session 12. Nov 5 15:05:22.324196 systemd[1]: Started sshd@12-172.31.23.78:22-139.178.89.65:40350.service - OpenSSH per-connection server daemon (139.178.89.65:40350). Nov 5 15:05:22.536997 sshd[5631]: Accepted publickey for core from 139.178.89.65 port 40350 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:05:22.539107 sshd-session[5631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:05:22.548991 systemd-logind[1950]: New session 13 of user core. Nov 5 15:05:22.554178 systemd[1]: Started session-13.scope - Session 13 of User core. 
Nov 5 15:05:22.821989 sshd[5634]: Connection closed by 139.178.89.65 port 40350 Nov 5 15:05:22.823637 sshd-session[5631]: pam_unix(sshd:session): session closed for user core Nov 5 15:05:22.830991 systemd-logind[1950]: Session 13 logged out. Waiting for processes to exit. Nov 5 15:05:22.831221 systemd[1]: sshd@12-172.31.23.78:22-139.178.89.65:40350.service: Deactivated successfully. Nov 5 15:05:22.836595 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 15:05:22.844574 systemd-logind[1950]: Removed session 13. Nov 5 15:05:27.860318 systemd[1]: Started sshd@13-172.31.23.78:22-139.178.89.65:47630.service - OpenSSH per-connection server daemon (139.178.89.65:47630). Nov 5 15:05:28.059627 sshd[5659]: Accepted publickey for core from 139.178.89.65 port 47630 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:05:28.061982 sshd-session[5659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:05:28.072967 systemd-logind[1950]: New session 14 of user core. Nov 5 15:05:28.083437 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 15:05:28.344636 sshd[5662]: Connection closed by 139.178.89.65 port 47630 Nov 5 15:05:28.345682 sshd-session[5659]: pam_unix(sshd:session): session closed for user core Nov 5 15:05:28.353186 systemd[1]: sshd@13-172.31.23.78:22-139.178.89.65:47630.service: Deactivated successfully. Nov 5 15:05:28.359709 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 15:05:28.362576 systemd-logind[1950]: Session 14 logged out. Waiting for processes to exit. Nov 5 15:05:28.368038 systemd-logind[1950]: Removed session 14. Nov 5 15:05:28.488306 kubelet[3315]: E1105 15:05:28.488064 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9456ddf4d-qdk95" podUID="bd295bb4-9ab3-4f09-8d18-d7e16c0d217c" Nov 5 15:05:29.080773 containerd[1997]: time="2025-11-05T15:05:29.080714056Z" level=info msg="TaskExit event in podsandbox handler container_id:\"016bd2fa2504481ef5d4bdbf0d8cda8fe521339c82de5cc275275617a29ebf53\" id:\"a9a85395056f8699653c20a78148876f3ae71ba0577d7e5ca3634f3891fa9802\" pid:5684 exit_status:1 exited_at:{seconds:1762355129 nanos:80324392}" Nov 5 15:05:29.490516 kubelet[3315]: E1105 15:05:29.490266 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78f86b5b57-hhskm" podUID="88b7c103-d45c-4fa8-81a5-56483036338a" Nov 5 15:05:29.493183 kubelet[3315]: E1105 15:05:29.492523 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79d458847d-vcdwj" podUID="7c3e0183-e5b9-4364-be32-8caba037f1e7" Nov 5 15:05:30.489187 kubelet[3315]: E1105 15:05:30.489089 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dbscs" podUID="80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572" Nov 5 15:05:31.489646 kubelet[3315]: E1105 15:05:31.489542 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9456ddf4d-hxsgd" podUID="8a16765c-7214-405b-a3ab-1a750d3fae14" Nov 5 15:05:31.491625 kubelet[3315]: E1105 15:05:31.490793 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q7wlk" podUID="ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7" Nov 5 15:05:33.388775 systemd[1]: Started sshd@14-172.31.23.78:22-139.178.89.65:47640.service - OpenSSH per-connection server daemon (139.178.89.65:47640). 
Nov 5 15:05:33.491572 kubelet[3315]: E1105 15:05:33.491456 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84dbb9fd44-dctgw" podUID="8b23d5a1-7fb9-4412-bcea-afb711fedf9c" Nov 5 15:05:33.614839 sshd[5703]: Accepted publickey for core from 139.178.89.65 port 47640 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:05:33.617731 sshd-session[5703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:05:33.632351 systemd-logind[1950]: New session 15 of user core. Nov 5 15:05:33.641210 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 15:05:33.912335 sshd[5706]: Connection closed by 139.178.89.65 port 47640 Nov 5 15:05:33.913631 sshd-session[5703]: pam_unix(sshd:session): session closed for user core Nov 5 15:05:33.922370 systemd[1]: sshd@14-172.31.23.78:22-139.178.89.65:47640.service: Deactivated successfully. Nov 5 15:05:33.927684 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 15:05:33.929977 systemd-logind[1950]: Session 15 logged out. Waiting for processes to exit. Nov 5 15:05:33.933924 systemd-logind[1950]: Removed session 15. Nov 5 15:05:38.953336 systemd[1]: Started sshd@15-172.31.23.78:22-139.178.89.65:51316.service - OpenSSH per-connection server daemon (139.178.89.65:51316). Nov 5 15:05:39.169120 sshd[5725]: Accepted publickey for core from 139.178.89.65 port 51316 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:05:39.172256 sshd-session[5725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:05:39.184976 systemd-logind[1950]: New session 16 of user core. Nov 5 15:05:39.194185 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 5 15:05:39.490521 sshd[5728]: Connection closed by 139.178.89.65 port 51316 Nov 5 15:05:39.491186 sshd-session[5725]: pam_unix(sshd:session): session closed for user core Nov 5 15:05:39.504040 containerd[1997]: time="2025-11-05T15:05:39.502537107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:05:39.512593 systemd[1]: sshd@15-172.31.23.78:22-139.178.89.65:51316.service: Deactivated successfully. Nov 5 15:05:39.521353 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 15:05:39.529949 systemd-logind[1950]: Session 16 logged out. Waiting for processes to exit. Nov 5 15:05:39.533205 systemd-logind[1950]: Removed session 16. 
Nov 5 15:05:39.821807 containerd[1997]: time="2025-11-05T15:05:39.821349521Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:39.823626 containerd[1997]: time="2025-11-05T15:05:39.823554497Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:05:39.824054 containerd[1997]: time="2025-11-05T15:05:39.823590365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:05:39.824398 kubelet[3315]: E1105 15:05:39.823859 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:05:39.825630 kubelet[3315]: E1105 15:05:39.824345 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:05:39.825630 kubelet[3315]: E1105 15:05:39.825131 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cr7wf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9456ddf4d-qdk95_calico-apiserver(bd295bb4-9ab3-4f09-8d18-d7e16c0d217c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:39.826457 kubelet[3315]: E1105 15:05:39.826318 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9456ddf4d-qdk95" podUID="bd295bb4-9ab3-4f09-8d18-d7e16c0d217c" Nov 5 15:05:43.497793 containerd[1997]: time="2025-11-05T15:05:43.496373179Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:05:43.819957 containerd[1997]: time="2025-11-05T15:05:43.819599025Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:43.821917 containerd[1997]: time="2025-11-05T15:05:43.821831889Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:05:43.822016 containerd[1997]: time="2025-11-05T15:05:43.821873673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:05:43.822213 kubelet[3315]: E1105 15:05:43.822158 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:05:43.823062 kubelet[3315]: E1105 15:05:43.822217 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:05:43.823062 kubelet[3315]: E1105 15:05:43.822543 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mt6w6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dbscs_calico-system(80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:43.824132 containerd[1997]: time="2025-11-05T15:05:43.823691745Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:05:44.157261 containerd[1997]: time="2025-11-05T15:05:44.157143666Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:44.164447 containerd[1997]: time="2025-11-05T15:05:44.163819830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:05:44.164730 containerd[1997]: time="2025-11-05T15:05:44.164360959Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:05:44.165013 kubelet[3315]: E1105 15:05:44.164796 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:05:44.165013 
kubelet[3315]: E1105 15:05:44.164928 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:05:44.166978 containerd[1997]: time="2025-11-05T15:05:44.165351547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:05:44.167151 kubelet[3315]: E1105 15:05:44.165516 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6dnpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78f86b5b57-hhskm_calico-system(88b7c103-d45c-4fa8-81a5-56483036338a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" 
Nov 5 15:05:44.167151 kubelet[3315]: E1105 15:05:44.167030 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78f86b5b57-hhskm" podUID="88b7c103-d45c-4fa8-81a5-56483036338a" Nov 5 15:05:44.532232 systemd[1]: Started sshd@16-172.31.23.78:22-139.178.89.65:51326.service - OpenSSH per-connection server daemon (139.178.89.65:51326). Nov 5 15:05:44.640609 containerd[1997]: time="2025-11-05T15:05:44.640527045Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:44.642816 containerd[1997]: time="2025-11-05T15:05:44.642733509Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:05:44.642945 containerd[1997]: time="2025-11-05T15:05:44.642861213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:05:44.643236 kubelet[3315]: E1105 15:05:44.643141 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:05:44.643968 kubelet[3315]: E1105 15:05:44.643237 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:05:44.643968 kubelet[3315]: E1105 15:05:44.643544 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7czzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9456ddf4d-hxsgd_calico-apiserver(8a16765c-7214-405b-a3ab-1a750d3fae14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:44.644817 containerd[1997]: time="2025-11-05T15:05:44.643861377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:05:44.645510 kubelet[3315]: E1105 15:05:44.645431 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9456ddf4d-hxsgd" podUID="8a16765c-7214-405b-a3ab-1a750d3fae14" Nov 5 15:05:44.733022 sshd[5746]: Accepted publickey for core from 139.178.89.65 port 51326 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:05:44.735726 sshd-session[5746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:05:44.744671 systemd-logind[1950]: New session 17 of user core. Nov 5 15:05:44.751145 systemd[1]: Started session-17.scope - Session 17 of User core. 
Nov 5 15:05:45.005791 sshd[5749]: Connection closed by 139.178.89.65 port 51326 Nov 5 15:05:45.006683 sshd-session[5746]: pam_unix(sshd:session): session closed for user core Nov 5 15:05:45.013976 systemd[1]: sshd@16-172.31.23.78:22-139.178.89.65:51326.service: Deactivated successfully. Nov 5 15:05:45.018339 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 15:05:45.020799 systemd-logind[1950]: Session 17 logged out. Waiting for processes to exit. Nov 5 15:05:45.025437 systemd-logind[1950]: Removed session 17. Nov 5 15:05:45.046199 systemd[1]: Started sshd@17-172.31.23.78:22-139.178.89.65:51330.service - OpenSSH per-connection server daemon (139.178.89.65:51330). Nov 5 15:05:45.080271 containerd[1997]: time="2025-11-05T15:05:45.080076511Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:45.082413 containerd[1997]: time="2025-11-05T15:05:45.082318123Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:05:45.084273 containerd[1997]: time="2025-11-05T15:05:45.082455847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:05:45.084273 containerd[1997]: time="2025-11-05T15:05:45.083502235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:05:45.084421 kubelet[3315]: E1105 15:05:45.082874 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:05:45.084421 kubelet[3315]: E1105 15:05:45.083043 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:05:45.084421 kubelet[3315]: E1105 15:05:45.083469 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ed7ddee92fa141e6860da0e5d6f43cfe,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c8rkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79d458847d-vcdwj_calico-system(7c3e0183-e5b9-4364-be32-8caba037f1e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:45.255281 sshd[5761]: Accepted publickey for core from 139.178.89.65 port 51330 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:05:45.257914 sshd-session[5761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:05:45.266974 systemd-logind[1950]: New session 18 of user core. Nov 5 15:05:45.280169 systemd[1]: Started session-18.scope - Session 18 of User core. 
Nov 5 15:05:45.448919 containerd[1997]: time="2025-11-05T15:05:45.448771497Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:45.451071 containerd[1997]: time="2025-11-05T15:05:45.450986829Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:05:45.451239 containerd[1997]: time="2025-11-05T15:05:45.451132881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:05:45.452200 kubelet[3315]: E1105 15:05:45.452130 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:05:45.452378 kubelet[3315]: E1105 15:05:45.452227 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:05:45.453579 kubelet[3315]: E1105 15:05:45.453145 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mt6w6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dbscs_calico-system(80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:45.455005 containerd[1997]: time="2025-11-05T15:05:45.453997437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:05:45.456746 kubelet[3315]: E1105 15:05:45.456012 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dbscs" podUID="80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572" Nov 5 15:05:45.773036 sshd[5764]: Connection closed by 139.178.89.65 port 51330 Nov 5 15:05:45.772872 sshd-session[5761]: pam_unix(sshd:session): session closed for user core Nov 5 15:05:45.778747 containerd[1997]: time="2025-11-05T15:05:45.778595807Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:45.782268 containerd[1997]: time="2025-11-05T15:05:45.782035031Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:05:45.782268 containerd[1997]: time="2025-11-05T15:05:45.782120759Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:05:45.783339 kubelet[3315]: E1105 15:05:45.783127 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:05:45.783339 kubelet[3315]: E1105 15:05:45.783332 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:05:45.783816 kubelet[3315]: E1105 15:05:45.783676 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qnp2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-q7wlk_calico-system(ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:45.784283 systemd[1]: sshd@17-172.31.23.78:22-139.178.89.65:51330.service: Deactivated successfully. Nov 5 15:05:45.789670 containerd[1997]: time="2025-11-05T15:05:45.784597343Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:05:45.790865 kubelet[3315]: E1105 15:05:45.785179 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q7wlk" podUID="ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7" Nov 5 15:05:45.795218 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 15:05:45.802229 systemd-logind[1950]: Session 18 logged out. Waiting for processes to exit. Nov 5 15:05:45.819689 systemd[1]: Started sshd@18-172.31.23.78:22-139.178.89.65:51336.service - OpenSSH per-connection server daemon (139.178.89.65:51336). Nov 5 15:05:45.822209 systemd-logind[1950]: Removed session 18. Nov 5 15:05:46.023500 sshd[5774]: Accepted publickey for core from 139.178.89.65 port 51336 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:05:46.026526 sshd-session[5774]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:05:46.036971 systemd-logind[1950]: New session 19 of user core. Nov 5 15:05:46.045256 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 5 15:05:46.351714 containerd[1997]: time="2025-11-05T15:05:46.351164409Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:46.354575 containerd[1997]: time="2025-11-05T15:05:46.354398865Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:05:46.354575 containerd[1997]: time="2025-11-05T15:05:46.354531333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:05:46.355075 kubelet[3315]: E1105 15:05:46.355014 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:05:46.356673 kubelet[3315]: E1105 15:05:46.355408 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:05:46.356673 kubelet[3315]: E1105 15:05:46.355744 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8rkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:ni
l,} start failed in pod whisker-79d458847d-vcdwj_calico-system(7c3e0183-e5b9-4364-be32-8caba037f1e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:46.358154 containerd[1997]: time="2025-11-05T15:05:46.356550669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:05:46.359178 kubelet[3315]: E1105 15:05:46.357641 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79d458847d-vcdwj" podUID="7c3e0183-e5b9-4364-be32-8caba037f1e7" Nov 5 15:05:46.651235 containerd[1997]: time="2025-11-05T15:05:46.651081791Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:05:46.653736 containerd[1997]: time="2025-11-05T15:05:46.653662715Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:05:46.654067 containerd[1997]: time="2025-11-05T15:05:46.653798363Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:05:46.655171 kubelet[3315]: E1105 15:05:46.654299 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:05:46.655171 kubelet[3315]: E1105 15:05:46.654359 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:05:46.655171 kubelet[3315]: E1105 15:05:46.654539 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-49sgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84dbb9fd44-dctgw_calico-apiserver(8b23d5a1-7fb9-4412-bcea-afb711fedf9c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:05:46.656384 kubelet[3315]: E1105 15:05:46.656319 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84dbb9fd44-dctgw" podUID="8b23d5a1-7fb9-4412-bcea-afb711fedf9c" Nov 5 15:05:47.277924 sshd[5779]: Connection closed by 139.178.89.65 port 51336 Nov 5 15:05:47.276788 sshd-session[5774]: pam_unix(sshd:session): session closed for user core Nov 5 15:05:47.288576 systemd[1]: sshd@18-172.31.23.78:22-139.178.89.65:51336.service: Deactivated successfully. Nov 5 15:05:47.299120 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 15:05:47.303855 systemd-logind[1950]: Session 19 logged out. Waiting for processes to exit. Nov 5 15:05:47.326119 systemd[1]: Started sshd@19-172.31.23.78:22-139.178.89.65:47190.service - OpenSSH per-connection server daemon (139.178.89.65:47190). Nov 5 15:05:47.330628 systemd-logind[1950]: Removed session 19. 
Nov 5 15:05:47.529602 sshd[5816]: Accepted publickey for core from 139.178.89.65 port 47190 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:05:47.532102 sshd-session[5816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:05:47.542001 systemd-logind[1950]: New session 20 of user core. Nov 5 15:05:47.546292 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 15:05:48.102823 sshd[5819]: Connection closed by 139.178.89.65 port 47190 Nov 5 15:05:48.103589 sshd-session[5816]: pam_unix(sshd:session): session closed for user core Nov 5 15:05:48.114332 systemd-logind[1950]: Session 20 logged out. Waiting for processes to exit. Nov 5 15:05:48.116174 systemd[1]: sshd@19-172.31.23.78:22-139.178.89.65:47190.service: Deactivated successfully. Nov 5 15:05:48.121548 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 15:05:48.139543 systemd-logind[1950]: Removed session 20. Nov 5 15:05:48.143335 systemd[1]: Started sshd@20-172.31.23.78:22-139.178.89.65:47192.service - OpenSSH per-connection server daemon (139.178.89.65:47192). Nov 5 15:05:48.336791 sshd[5828]: Accepted publickey for core from 139.178.89.65 port 47192 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:05:48.339364 sshd-session[5828]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:05:48.350288 systemd-logind[1950]: New session 21 of user core. Nov 5 15:05:48.355202 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 5 15:05:48.597565 sshd[5831]: Connection closed by 139.178.89.65 port 47192 Nov 5 15:05:48.597445 sshd-session[5828]: pam_unix(sshd:session): session closed for user core Nov 5 15:05:48.604735 systemd[1]: sshd@20-172.31.23.78:22-139.178.89.65:47192.service: Deactivated successfully. Nov 5 15:05:48.609505 systemd[1]: session-21.scope: Deactivated successfully. Nov 5 15:05:48.612832 systemd-logind[1950]: Session 21 logged out. Waiting for processes to exit. Nov 5 15:05:48.617741 systemd-logind[1950]: Removed session 21. Nov 5 15:05:51.488085 kubelet[3315]: E1105 15:05:51.488002 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9456ddf4d-qdk95" podUID="bd295bb4-9ab3-4f09-8d18-d7e16c0d217c" Nov 5 15:05:53.633771 systemd[1]: Started sshd@21-172.31.23.78:22-139.178.89.65:47200.service - OpenSSH per-connection server daemon (139.178.89.65:47200). Nov 5 15:05:53.834450 sshd[5843]: Accepted publickey for core from 139.178.89.65 port 47200 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:05:53.837712 sshd-session[5843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:05:53.845561 systemd-logind[1950]: New session 22 of user core. Nov 5 15:05:53.855150 systemd[1]: Started session-22.scope - Session 22 of User core. 
Nov 5 15:05:54.114982 sshd[5846]: Connection closed by 139.178.89.65 port 47200 Nov 5 15:05:54.116053 sshd-session[5843]: pam_unix(sshd:session): session closed for user core Nov 5 15:05:54.124743 systemd[1]: sshd@21-172.31.23.78:22-139.178.89.65:47200.service: Deactivated successfully. Nov 5 15:05:54.129506 systemd[1]: session-22.scope: Deactivated successfully. Nov 5 15:05:54.132105 systemd-logind[1950]: Session 22 logged out. Waiting for processes to exit. Nov 5 15:05:54.135991 systemd-logind[1950]: Removed session 22. Nov 5 15:05:55.488831 kubelet[3315]: E1105 15:05:55.488755 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9456ddf4d-hxsgd" podUID="8a16765c-7214-405b-a3ab-1a750d3fae14" Nov 5 15:05:56.490513 kubelet[3315]: E1105 15:05:56.489608 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78f86b5b57-hhskm" podUID="88b7c103-d45c-4fa8-81a5-56483036338a" Nov 5 15:05:57.489379 kubelet[3315]: E1105 15:05:57.489278 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84dbb9fd44-dctgw" podUID="8b23d5a1-7fb9-4412-bcea-afb711fedf9c" Nov 5 15:05:57.494129 kubelet[3315]: E1105 15:05:57.493968 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79d458847d-vcdwj" podUID="7c3e0183-e5b9-4364-be32-8caba037f1e7" Nov 5 15:05:58.490109 
kubelet[3315]: E1105 15:05:58.490022 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q7wlk" podUID="ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7" Nov 5 15:05:58.492923 kubelet[3315]: E1105 15:05:58.492706 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dbscs" podUID="80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572" Nov 5 15:05:59.139217 containerd[1997]: time="2025-11-05T15:05:59.139132149Z" level=info msg="TaskExit event in podsandbox handler container_id:\"016bd2fa2504481ef5d4bdbf0d8cda8fe521339c82de5cc275275617a29ebf53\" id:\"9d009e19196af5b201936f9d93d17d7516b1a523f0fb15fb3d33672493551f1b\" pid:5871 exited_at:{seconds:1762355159 nanos:138198453}" Nov 5 15:05:59.164536 systemd[1]: Started sshd@22-172.31.23.78:22-139.178.89.65:58782.service - OpenSSH per-connection server daemon (139.178.89.65:58782). Nov 5 15:05:59.383946 sshd[5883]: Accepted publickey for core from 139.178.89.65 port 58782 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:05:59.387322 sshd-session[5883]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:05:59.400364 systemd-logind[1950]: New session 23 of user core. Nov 5 15:05:59.403699 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 5 15:05:59.707659 sshd[5886]: Connection closed by 139.178.89.65 port 58782 Nov 5 15:05:59.708725 sshd-session[5883]: pam_unix(sshd:session): session closed for user core Nov 5 15:05:59.719029 systemd[1]: sshd@22-172.31.23.78:22-139.178.89.65:58782.service: Deactivated successfully. Nov 5 15:05:59.723489 systemd[1]: session-23.scope: Deactivated successfully. Nov 5 15:05:59.732006 systemd-logind[1950]: Session 23 logged out. Waiting for processes to exit. Nov 5 15:05:59.741479 systemd-logind[1950]: Removed session 23. 
Nov 5 15:06:04.489540 kubelet[3315]: E1105 15:06:04.489467 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9456ddf4d-qdk95" podUID="bd295bb4-9ab3-4f09-8d18-d7e16c0d217c" Nov 5 15:06:04.788082 systemd[1]: Started sshd@23-172.31.23.78:22-139.178.89.65:58790.service - OpenSSH per-connection server daemon (139.178.89.65:58790). Nov 5 15:06:05.017508 sshd[5903]: Accepted publickey for core from 139.178.89.65 port 58790 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:06:05.021494 sshd-session[5903]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:06:05.032556 systemd-logind[1950]: New session 24 of user core. Nov 5 15:06:05.039570 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 5 15:06:05.317845 sshd[5906]: Connection closed by 139.178.89.65 port 58790 Nov 5 15:06:05.318696 sshd-session[5903]: pam_unix(sshd:session): session closed for user core Nov 5 15:06:05.329193 systemd[1]: sshd@23-172.31.23.78:22-139.178.89.65:58790.service: Deactivated successfully. Nov 5 15:06:05.333591 systemd[1]: session-24.scope: Deactivated successfully. Nov 5 15:06:05.338341 systemd-logind[1950]: Session 24 logged out. Waiting for processes to exit. Nov 5 15:06:05.343689 systemd-logind[1950]: Removed session 24. Nov 5 15:06:06.489369 kubelet[3315]: E1105 15:06:06.489285 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9456ddf4d-hxsgd" podUID="8a16765c-7214-405b-a3ab-1a750d3fae14" Nov 5 15:06:08.489336 kubelet[3315]: E1105 15:06:08.489218 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78f86b5b57-hhskm" podUID="88b7c103-d45c-4fa8-81a5-56483036338a" Nov 5 15:06:09.493352 kubelet[3315]: E1105 15:06:09.492147 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79d458847d-vcdwj" podUID="7c3e0183-e5b9-4364-be32-8caba037f1e7" Nov 5 15:06:10.356338 systemd[1]: Started sshd@24-172.31.23.78:22-139.178.89.65:50770.service - OpenSSH per-connection server daemon (139.178.89.65:50770). Nov 5 15:06:10.489648 kubelet[3315]: E1105 15:06:10.489574 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q7wlk" podUID="ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7" Nov 5 15:06:10.558674 sshd[5918]: Accepted publickey for core from 139.178.89.65 port 50770 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:06:10.565121 sshd-session[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:06:10.582282 systemd-logind[1950]: New session 25 of user core. Nov 5 15:06:10.588356 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 5 15:06:10.900925 sshd[5921]: Connection closed by 139.178.89.65 port 50770 Nov 5 15:06:10.901923 sshd-session[5918]: pam_unix(sshd:session): session closed for user core Nov 5 15:06:10.913595 systemd-logind[1950]: Session 25 logged out. Waiting for processes to exit. Nov 5 15:06:10.915414 systemd[1]: sshd@24-172.31.23.78:22-139.178.89.65:50770.service: Deactivated successfully. Nov 5 15:06:10.921853 systemd[1]: session-25.scope: Deactivated successfully. Nov 5 15:06:10.929281 systemd-logind[1950]: Removed session 25. 
Nov 5 15:06:11.492766 kubelet[3315]: E1105 15:06:11.492592 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dbscs" podUID="80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572" Nov 5 15:06:12.488796 kubelet[3315]: E1105 15:06:12.488719 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84dbb9fd44-dctgw" podUID="8b23d5a1-7fb9-4412-bcea-afb711fedf9c" Nov 5 15:06:15.949931 systemd[1]: Started sshd@25-172.31.23.78:22-139.178.89.65:50786.service - OpenSSH per-connection server daemon (139.178.89.65:50786). Nov 5 15:06:16.172388 sshd[5933]: Accepted publickey for core from 139.178.89.65 port 50786 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:06:16.175288 sshd-session[5933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:06:16.184994 systemd-logind[1950]: New session 26 of user core. Nov 5 15:06:16.192216 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 5 15:06:16.520937 sshd[5936]: Connection closed by 139.178.89.65 port 50786 Nov 5 15:06:16.520734 sshd-session[5933]: pam_unix(sshd:session): session closed for user core Nov 5 15:06:16.532192 systemd[1]: sshd@25-172.31.23.78:22-139.178.89.65:50786.service: Deactivated successfully. Nov 5 15:06:16.541378 systemd[1]: session-26.scope: Deactivated successfully. Nov 5 15:06:16.543308 systemd-logind[1950]: Session 26 logged out. Waiting for processes to exit. Nov 5 15:06:16.550613 systemd-logind[1950]: Removed session 26. 
Nov 5 15:06:18.489214 kubelet[3315]: E1105 15:06:18.487844 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9456ddf4d-hxsgd" podUID="8a16765c-7214-405b-a3ab-1a750d3fae14" Nov 5 15:06:18.490341 kubelet[3315]: E1105 15:06:18.490271 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9456ddf4d-qdk95" podUID="bd295bb4-9ab3-4f09-8d18-d7e16c0d217c" Nov 5 15:06:20.488863 kubelet[3315]: E1105 15:06:20.488757 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78f86b5b57-hhskm" podUID="88b7c103-d45c-4fa8-81a5-56483036338a" Nov 5 15:06:21.490511 kubelet[3315]: E1105 15:06:21.490410 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q7wlk" podUID="ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7" Nov 5 15:06:21.557389 systemd[1]: Started sshd@26-172.31.23.78:22-139.178.89.65:46202.service - OpenSSH per-connection server daemon (139.178.89.65:46202). Nov 5 15:06:21.773118 sshd[5950]: Accepted publickey for core from 139.178.89.65 port 46202 ssh2: RSA SHA256:AXdl0qxEckaI43Z7wHyF3i9fg3UyK1i6tXgWUp7EPFc Nov 5 15:06:21.781497 sshd-session[5950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:06:21.795994 systemd-logind[1950]: New session 27 of user core. Nov 5 15:06:21.807226 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 5 15:06:22.111206 sshd[5953]: Connection closed by 139.178.89.65 port 46202 Nov 5 15:06:22.109705 sshd-session[5950]: pam_unix(sshd:session): session closed for user core Nov 5 15:06:22.118102 systemd[1]: sshd@26-172.31.23.78:22-139.178.89.65:46202.service: Deactivated successfully. Nov 5 15:06:22.125066 systemd[1]: session-27.scope: Deactivated successfully. 
Nov 5 15:06:22.129470 systemd-logind[1950]: Session 27 logged out. Waiting for processes to exit. Nov 5 15:06:22.132253 systemd-logind[1950]: Removed session 27. Nov 5 15:06:24.489412 kubelet[3315]: E1105 15:06:24.489284 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79d458847d-vcdwj" podUID="7c3e0183-e5b9-4364-be32-8caba037f1e7" Nov 5 15:06:25.488493 containerd[1997]: time="2025-11-05T15:06:25.488431620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Nov 5 15:06:25.799252 containerd[1997]: time="2025-11-05T15:06:25.799088161Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:06:25.801384 containerd[1997]: time="2025-11-05T15:06:25.801285433Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Nov 5 15:06:25.801533 containerd[1997]: time="2025-11-05T15:06:25.801357949Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Nov 5 15:06:25.801836 kubelet[3315]: E1105 15:06:25.801754 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:06:25.801836 kubelet[3315]: E1105 15:06:25.801824 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Nov 5 15:06:25.802640 kubelet[3315]: E1105 15:06:25.802064 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mt6w6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dbscs_calico-system(80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Nov 5 15:06:25.805488 containerd[1997]: time="2025-11-05T15:06:25.805151665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Nov 5 15:06:26.062967 containerd[1997]: time="2025-11-05T15:06:26.062785151Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:06:26.066087 containerd[1997]: time="2025-11-05T15:06:26.065938487Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Nov 5 15:06:26.066087 containerd[1997]: time="2025-11-05T15:06:26.066002111Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Nov 5 15:06:26.066600 kubelet[3315]: E1105 15:06:26.066503 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:06:26.066600 kubelet[3315]: E1105 15:06:26.066568 3315 
kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Nov 5 15:06:26.067000 kubelet[3315]: E1105 15:06:26.066930 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mt6w6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dbscs_calico-system(80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Nov 5 15:06:26.068437 kubelet[3315]: E1105 15:06:26.068368 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-dbscs" podUID="80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572" Nov 5 15:06:26.487055 kubelet[3315]: E1105 15:06:26.486870 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84dbb9fd44-dctgw" podUID="8b23d5a1-7fb9-4412-bcea-afb711fedf9c" Nov 5 15:06:29.078257 containerd[1997]: time="2025-11-05T15:06:29.078167438Z" level=info msg="TaskExit event in podsandbox handler container_id:\"016bd2fa2504481ef5d4bdbf0d8cda8fe521339c82de5cc275275617a29ebf53\" id:\"07d19b00d99b112542fe973d079766f8db44e3580cd60876e304d07f76d14035\" pid:5984 exited_at:{seconds:1762355189 nanos:77333918}" Nov 5 15:06:31.489537 containerd[1997]: time="2025-11-05T15:06:31.488664402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:06:31.785183 containerd[1997]: time="2025-11-05T15:06:31.785031931Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:06:31.787579 containerd[1997]: time="2025-11-05T15:06:31.787493839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:06:31.787726 containerd[1997]: time="2025-11-05T15:06:31.787632259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:06:31.788038 kubelet[3315]: E1105 15:06:31.787973 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:06:31.789399 kubelet[3315]: E1105 15:06:31.788048 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:06:31.789399 kubelet[3315]: E1105 15:06:31.788725 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cr7wf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9456ddf4d-qdk95_calico-apiserver(bd295bb4-9ab3-4f09-8d18-d7e16c0d217c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:06:31.789624 containerd[1997]: time="2025-11-05T15:06:31.788514847Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:06:31.790433 kubelet[3315]: E1105 15:06:31.790375 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9456ddf4d-qdk95" podUID="bd295bb4-9ab3-4f09-8d18-d7e16c0d217c" Nov 5 15:06:32.070952 containerd[1997]: time="2025-11-05T15:06:32.070758676Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:06:32.073040 containerd[1997]: time="2025-11-05T15:06:32.072964180Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:06:32.073147 containerd[1997]: time="2025-11-05T15:06:32.073099312Z" 
level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:06:32.073474 kubelet[3315]: E1105 15:06:32.073299 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:06:32.073965 kubelet[3315]: E1105 15:06:32.073487 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:06:32.073965 kubelet[3315]: E1105 15:06:32.073806 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7czzv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-9456ddf4d-hxsgd_calico-apiserver(8a16765c-7214-405b-a3ab-1a750d3fae14): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:06:32.074236 containerd[1997]: time="2025-11-05T15:06:32.074197096Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Nov 5 15:06:32.075806 kubelet[3315]: E1105 15:06:32.075708 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9456ddf4d-hxsgd" podUID="8a16765c-7214-405b-a3ab-1a750d3fae14" Nov 5 15:06:32.366767 containerd[1997]: time="2025-11-05T15:06:32.366483474Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:06:32.368864 containerd[1997]: time="2025-11-05T15:06:32.368787414Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Nov 5 15:06:32.368864 containerd[1997]: time="2025-11-05T15:06:32.368819574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Nov 5 15:06:32.369184 kubelet[3315]: E1105 15:06:32.369096 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:06:32.369258 kubelet[3315]: E1105 15:06:32.369205 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Nov 5 15:06:32.369606 kubelet[3315]: E1105 15:06:32.369473 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6dnpk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78f86b5b57-hhskm_calico-system(88b7c103-d45c-4fa8-81a5-56483036338a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Nov 5 15:06:32.371041 kubelet[3315]: E1105 15:06:32.370971 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78f86b5b57-hhskm" podUID="88b7c103-d45c-4fa8-81a5-56483036338a" Nov 5 15:06:34.487581 containerd[1997]: time="2025-11-05T15:06:34.487507292Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Nov 5 15:06:34.859593 containerd[1997]: time="2025-11-05T15:06:34.859419022Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:06:34.862179 containerd[1997]: time="2025-11-05T15:06:34.862045738Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Nov 5 15:06:34.862590 containerd[1997]: time="2025-11-05T15:06:34.862363006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Nov 5 15:06:34.863125 kubelet[3315]: E1105 15:06:34.862709 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:06:34.863125 kubelet[3315]: E1105 15:06:34.862775 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Nov 5 15:06:34.863125 kubelet[3315]: E1105 15:06:34.863023 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qnp2k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-q7wlk_calico-system(ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Nov 5 15:06:34.864324 kubelet[3315]: E1105 15:06:34.864248 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q7wlk" podUID="ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7" Nov 5 15:06:35.947549 systemd[1]: cri-containerd-bbb21aa4d661408ff1e01af555343338ee286e88314f3922a9e7272f7a216c3f.scope: Deactivated successfully. Nov 5 15:06:35.949161 systemd[1]: cri-containerd-bbb21aa4d661408ff1e01af555343338ee286e88314f3922a9e7272f7a216c3f.scope: Consumed 33.451s CPU time, 101M memory peak. Nov 5 15:06:35.953900 containerd[1997]: time="2025-11-05T15:06:35.953830428Z" level=info msg="received exit event container_id:\"bbb21aa4d661408ff1e01af555343338ee286e88314f3922a9e7272f7a216c3f\" id:\"bbb21aa4d661408ff1e01af555343338ee286e88314f3922a9e7272f7a216c3f\" pid:3743 exit_status:1 exited_at:{seconds:1762355195 nanos:953370768}" Nov 5 15:06:35.955390 containerd[1997]: time="2025-11-05T15:06:35.955279704Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bbb21aa4d661408ff1e01af555343338ee286e88314f3922a9e7272f7a216c3f\" id:\"bbb21aa4d661408ff1e01af555343338ee286e88314f3922a9e7272f7a216c3f\" pid:3743 exit_status:1 exited_at:{seconds:1762355195 nanos:953370768}" Nov 5 15:06:35.998829 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbb21aa4d661408ff1e01af555343338ee286e88314f3922a9e7272f7a216c3f-rootfs.mount: Deactivated successfully. 
Nov 5 15:06:36.372343 kubelet[3315]: I1105 15:06:36.371593 3315 scope.go:117] "RemoveContainer" containerID="bbb21aa4d661408ff1e01af555343338ee286e88314f3922a9e7272f7a216c3f" Nov 5 15:06:36.376778 containerd[1997]: time="2025-11-05T15:06:36.376711594Z" level=info msg="CreateContainer within sandbox \"ab661f5a84f78140b9df64066103e79bf9dbf8c9086c18f181df9322f7e5172f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Nov 5 15:06:36.396927 containerd[1997]: time="2025-11-05T15:06:36.394877050Z" level=info msg="Container 63028379d823617a9c47d1d0fb61e7fd9aa6234526f1f4fd09838b596bd1017e: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:06:36.406418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4204313631.mount: Deactivated successfully. Nov 5 15:06:36.411192 containerd[1997]: time="2025-11-05T15:06:36.411120514Z" level=info msg="CreateContainer within sandbox \"ab661f5a84f78140b9df64066103e79bf9dbf8c9086c18f181df9322f7e5172f\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"63028379d823617a9c47d1d0fb61e7fd9aa6234526f1f4fd09838b596bd1017e\"" Nov 5 15:06:36.412035 containerd[1997]: time="2025-11-05T15:06:36.411974566Z" level=info msg="StartContainer for \"63028379d823617a9c47d1d0fb61e7fd9aa6234526f1f4fd09838b596bd1017e\"" Nov 5 15:06:36.413857 containerd[1997]: time="2025-11-05T15:06:36.413783482Z" level=info msg="connecting to shim 63028379d823617a9c47d1d0fb61e7fd9aa6234526f1f4fd09838b596bd1017e" address="unix:///run/containerd/s/eee357984149ba1d744ed8e93b8aa04ce7f372f175b7b74aa581763e09560802" protocol=ttrpc version=3 Nov 5 15:06:36.456194 systemd[1]: Started cri-containerd-63028379d823617a9c47d1d0fb61e7fd9aa6234526f1f4fd09838b596bd1017e.scope - libcontainer container 63028379d823617a9c47d1d0fb61e7fd9aa6234526f1f4fd09838b596bd1017e. Nov 5 15:06:36.491600 kubelet[3315]: E1105 15:06:36.491417 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dbscs" podUID="80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572" Nov 5 15:06:36.526967 containerd[1997]: time="2025-11-05T15:06:36.526501343Z" level=info msg="StartContainer for \"63028379d823617a9c47d1d0fb61e7fd9aa6234526f1f4fd09838b596bd1017e\" returns successfully" Nov 5 15:06:36.545110 systemd[1]: cri-containerd-193011a918827a5a67788d2b37d5af2f511b14ea3aa0c646b07436e143ae5110.scope: Deactivated successfully. Nov 5 15:06:36.545790 systemd[1]: cri-containerd-193011a918827a5a67788d2b37d5af2f511b14ea3aa0c646b07436e143ae5110.scope: Consumed 4.709s CPU time, 60M memory peak. 
Nov 5 15:06:36.555068 containerd[1997]: time="2025-11-05T15:06:36.555005939Z" level=info msg="TaskExit event in podsandbox handler container_id:\"193011a918827a5a67788d2b37d5af2f511b14ea3aa0c646b07436e143ae5110\" id:\"193011a918827a5a67788d2b37d5af2f511b14ea3aa0c646b07436e143ae5110\" pid:3158 exit_status:1 exited_at:{seconds:1762355196 nanos:554384123}" Nov 5 15:06:36.556007 containerd[1997]: time="2025-11-05T15:06:36.555385031Z" level=info msg="received exit event container_id:\"193011a918827a5a67788d2b37d5af2f511b14ea3aa0c646b07436e143ae5110\" id:\"193011a918827a5a67788d2b37d5af2f511b14ea3aa0c646b07436e143ae5110\" pid:3158 exit_status:1 exited_at:{seconds:1762355196 nanos:554384123}" Nov 5 15:06:36.623229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-193011a918827a5a67788d2b37d5af2f511b14ea3aa0c646b07436e143ae5110-rootfs.mount: Deactivated successfully. Nov 5 15:06:37.378639 kubelet[3315]: I1105 15:06:37.378564 3315 scope.go:117] "RemoveContainer" containerID="193011a918827a5a67788d2b37d5af2f511b14ea3aa0c646b07436e143ae5110" Nov 5 15:06:37.383742 containerd[1997]: time="2025-11-05T15:06:37.382883567Z" level=info msg="CreateContainer within sandbox \"4d8133ee8796062924af695b436b2af2e2e19649b5584bffe1877dcfdecae9f8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Nov 5 15:06:37.401513 containerd[1997]: time="2025-11-05T15:06:37.399512375Z" level=info msg="Container 98c15ed3a419a789c5835cd52bdc4a7e3f71f5c454383251ca7b4aa4543e2023: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:06:37.419635 containerd[1997]: time="2025-11-05T15:06:37.419557451Z" level=info msg="CreateContainer within sandbox \"4d8133ee8796062924af695b436b2af2e2e19649b5584bffe1877dcfdecae9f8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"98c15ed3a419a789c5835cd52bdc4a7e3f71f5c454383251ca7b4aa4543e2023\"" Nov 5 15:06:37.420668 containerd[1997]: time="2025-11-05T15:06:37.420606023Z" level=info msg="StartContainer for \"98c15ed3a419a789c5835cd52bdc4a7e3f71f5c454383251ca7b4aa4543e2023\"" Nov 5 15:06:37.422841 containerd[1997]: time="2025-11-05T15:06:37.422773751Z" level=info msg="connecting to shim 98c15ed3a419a789c5835cd52bdc4a7e3f71f5c454383251ca7b4aa4543e2023" address="unix:///run/containerd/s/103b719becc3f9f50e92dca0193c7fc911957fd31522fa69dc7ccf27f0ef8b7f" protocol=ttrpc version=3 Nov 5 15:06:37.463535 systemd[1]: Started cri-containerd-98c15ed3a419a789c5835cd52bdc4a7e3f71f5c454383251ca7b4aa4543e2023.scope - libcontainer container 98c15ed3a419a789c5835cd52bdc4a7e3f71f5c454383251ca7b4aa4543e2023. 
Nov 5 15:06:37.564737 containerd[1997]: time="2025-11-05T15:06:37.564579672Z" level=info msg="StartContainer for \"98c15ed3a419a789c5835cd52bdc4a7e3f71f5c454383251ca7b4aa4543e2023\" returns successfully" Nov 5 15:06:39.491072 containerd[1997]: time="2025-11-05T15:06:39.489628381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Nov 5 15:06:39.862517 containerd[1997]: time="2025-11-05T15:06:39.862365219Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:06:39.864909 containerd[1997]: time="2025-11-05T15:06:39.864828147Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Nov 5 15:06:39.865035 containerd[1997]: time="2025-11-05T15:06:39.864983691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Nov 5 15:06:39.865416 kubelet[3315]: E1105 15:06:39.865309 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:06:39.867042 kubelet[3315]: E1105 15:06:39.865411 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Nov 5 15:06:39.867042 kubelet[3315]: E1105 15:06:39.866580 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-49sgl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-84dbb9fd44-dctgw_calico-apiserver(8b23d5a1-7fb9-4412-bcea-afb711fedf9c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Nov 5 15:06:39.867313 containerd[1997]: time="2025-11-05T15:06:39.866098443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Nov 5 15:06:39.868222 kubelet[3315]: E1105 15:06:39.868168 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84dbb9fd44-dctgw" podUID="8b23d5a1-7fb9-4412-bcea-afb711fedf9c" Nov 5 15:06:40.266194 containerd[1997]: time="2025-11-05T15:06:40.266119117Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:06:40.268366 containerd[1997]: time="2025-11-05T15:06:40.268285213Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Nov 5 15:06:40.268456 containerd[1997]: time="2025-11-05T15:06:40.268426933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Nov 5 15:06:40.268782 kubelet[3315]: E1105 15:06:40.268707 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:06:40.268871 kubelet[3315]: E1105 15:06:40.268800 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Nov 5 15:06:40.269208 kubelet[3315]: E1105 15:06:40.269105 3315 kuberuntime_manager.go:1358] 
"Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:ed7ddee92fa141e6860da0e5d6f43cfe,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c8rkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79d458847d-vcdwj_calico-system(7c3e0183-e5b9-4364-be32-8caba037f1e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Nov 5 15:06:40.272009 containerd[1997]: time="2025-11-05T15:06:40.271923277Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Nov 5 15:06:40.730650 containerd[1997]: time="2025-11-05T15:06:40.730442547Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Nov 5 15:06:40.732708 containerd[1997]: time="2025-11-05T15:06:40.732571599Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Nov 5 15:06:40.732708 containerd[1997]: time="2025-11-05T15:06:40.732646791Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Nov 5 15:06:40.733404 kubelet[3315]: E1105 15:06:40.733072 3315 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:06:40.733404 kubelet[3315]: E1105 15:06:40.733153 3315 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Nov 5 15:06:40.733404 kubelet[3315]: E1105 15:06:40.733318 3315 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c8rkz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-79d458847d-vcdwj_calico-system(7c3e0183-e5b9-4364-be32-8caba037f1e7): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Nov 5 15:06:40.734630 kubelet[3315]: E1105 15:06:40.734546 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79d458847d-vcdwj" podUID="7c3e0183-e5b9-4364-be32-8caba037f1e7" Nov 5 15:06:41.513328 systemd[1]: cri-containerd-aeb56f75c0ef37ea2284ba2c790eaeda6a0b20bd8a2dcb07eb7debf40b70c907.scope: Deactivated successfully. 
Nov 5 15:06:41.513859 systemd[1]: cri-containerd-aeb56f75c0ef37ea2284ba2c790eaeda6a0b20bd8a2dcb07eb7debf40b70c907.scope: Consumed 6.206s CPU time, 21M memory peak. Nov 5 15:06:41.520190 containerd[1997]: time="2025-11-05T15:06:41.519822843Z" level=info msg="received exit event container_id:\"aeb56f75c0ef37ea2284ba2c790eaeda6a0b20bd8a2dcb07eb7debf40b70c907\" id:\"aeb56f75c0ef37ea2284ba2c790eaeda6a0b20bd8a2dcb07eb7debf40b70c907\" pid:3149 exit_status:1 exited_at:{seconds:1762355201 nanos:519380895}" Nov 5 15:06:41.520605 containerd[1997]: time="2025-11-05T15:06:41.520477755Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aeb56f75c0ef37ea2284ba2c790eaeda6a0b20bd8a2dcb07eb7debf40b70c907\" id:\"aeb56f75c0ef37ea2284ba2c790eaeda6a0b20bd8a2dcb07eb7debf40b70c907\" pid:3149 exit_status:1 exited_at:{seconds:1762355201 nanos:519380895}" Nov 5 15:06:41.563055 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aeb56f75c0ef37ea2284ba2c790eaeda6a0b20bd8a2dcb07eb7debf40b70c907-rootfs.mount: Deactivated successfully. Nov 5 15:06:42.407699 kubelet[3315]: I1105 15:06:42.407651 3315 scope.go:117] "RemoveContainer" containerID="aeb56f75c0ef37ea2284ba2c790eaeda6a0b20bd8a2dcb07eb7debf40b70c907" Nov 5 15:06:42.411725 containerd[1997]: time="2025-11-05T15:06:42.411678676Z" level=info msg="CreateContainer within sandbox \"5d48e94e50c437e9e54b325a1223f76462f89abd3dc2d6f0e2cef50553fd43af\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Nov 5 15:06:42.429979 containerd[1997]: time="2025-11-05T15:06:42.429909064Z" level=info msg="Container 2ad8b45048b6748840cccf1ffac0c521069503ee5689331574ee8335e18ca6ca: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:06:42.450345 containerd[1997]: time="2025-11-05T15:06:42.450267520Z" level=info msg="CreateContainer within sandbox \"5d48e94e50c437e9e54b325a1223f76462f89abd3dc2d6f0e2cef50553fd43af\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"2ad8b45048b6748840cccf1ffac0c521069503ee5689331574ee8335e18ca6ca\"" Nov 5 15:06:42.451434 containerd[1997]: time="2025-11-05T15:06:42.451384540Z" level=info msg="StartContainer for \"2ad8b45048b6748840cccf1ffac0c521069503ee5689331574ee8335e18ca6ca\"" Nov 5 15:06:42.453588 containerd[1997]: time="2025-11-05T15:06:42.453519484Z" level=info msg="connecting to shim 2ad8b45048b6748840cccf1ffac0c521069503ee5689331574ee8335e18ca6ca" address="unix:///run/containerd/s/b3106deb3a800ba1e9662e941f3f73aec7e98579dc994babc2f64f64f6506dac" protocol=ttrpc version=3 Nov 5 15:06:42.495700 systemd[1]: Started cri-containerd-2ad8b45048b6748840cccf1ffac0c521069503ee5689331574ee8335e18ca6ca.scope - libcontainer container 2ad8b45048b6748840cccf1ffac0c521069503ee5689331574ee8335e18ca6ca. 
Nov 5 15:06:42.579746 containerd[1997]: time="2025-11-05T15:06:42.579612485Z" level=info msg="StartContainer for \"2ad8b45048b6748840cccf1ffac0c521069503ee5689331574ee8335e18ca6ca\" returns successfully" Nov 5 15:06:42.600972 kubelet[3315]: E1105 15:06:42.600854 3315 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-78?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 5 15:06:44.486751 kubelet[3315]: E1105 15:06:44.486624 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78f86b5b57-hhskm" podUID="88b7c103-d45c-4fa8-81a5-56483036338a" Nov 5 15:06:46.487636 kubelet[3315]: E1105 15:06:46.487563 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9456ddf4d-qdk95" podUID="bd295bb4-9ab3-4f09-8d18-d7e16c0d217c" Nov 5 15:06:47.504217 kubelet[3315]: E1105 15:06:47.503949 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q7wlk" podUID="ceff6c23-cc8d-4d0d-a96c-00e2c04e9ec7" Nov 5 15:06:47.505247 kubelet[3315]: E1105 15:06:47.504435 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-9456ddf4d-hxsgd" podUID="8a16765c-7214-405b-a3ab-1a750d3fae14" Nov 5 15:06:48.038094 systemd[1]: cri-containerd-63028379d823617a9c47d1d0fb61e7fd9aa6234526f1f4fd09838b596bd1017e.scope: Deactivated successfully. 
Nov 5 15:06:48.040356 containerd[1997]: time="2025-11-05T15:06:48.038690384Z" level=info msg="received exit event container_id:\"63028379d823617a9c47d1d0fb61e7fd9aa6234526f1f4fd09838b596bd1017e\" id:\"63028379d823617a9c47d1d0fb61e7fd9aa6234526f1f4fd09838b596bd1017e\" pid:6043 exit_status:1 exited_at:{seconds:1762355208 nanos:38391248}" Nov 5 15:06:48.040356 containerd[1997]: time="2025-11-05T15:06:48.039167384Z" level=info msg="TaskExit event in podsandbox handler container_id:\"63028379d823617a9c47d1d0fb61e7fd9aa6234526f1f4fd09838b596bd1017e\" id:\"63028379d823617a9c47d1d0fb61e7fd9aa6234526f1f4fd09838b596bd1017e\" pid:6043 exit_status:1 exited_at:{seconds:1762355208 nanos:38391248}" Nov 5 15:06:48.083157 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-63028379d823617a9c47d1d0fb61e7fd9aa6234526f1f4fd09838b596bd1017e-rootfs.mount: Deactivated successfully. Nov 5 15:06:48.437584 kubelet[3315]: I1105 15:06:48.437524 3315 scope.go:117] "RemoveContainer" containerID="bbb21aa4d661408ff1e01af555343338ee286e88314f3922a9e7272f7a216c3f" Nov 5 15:06:48.438189 kubelet[3315]: I1105 15:06:48.438140 3315 scope.go:117] "RemoveContainer" containerID="63028379d823617a9c47d1d0fb61e7fd9aa6234526f1f4fd09838b596bd1017e" Nov 5 15:06:48.439431 kubelet[3315]: E1105 15:06:48.439337 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-rcmtk_tigera-operator(e7603134-5dac-4f8b-837d-99eabd361f43)\"" pod="tigera-operator/tigera-operator-7dcd859c48-rcmtk" podUID="e7603134-5dac-4f8b-837d-99eabd361f43" Nov 5 15:06:48.441487 containerd[1997]: time="2025-11-05T15:06:48.441430186Z" level=info msg="RemoveContainer for \"bbb21aa4d661408ff1e01af555343338ee286e88314f3922a9e7272f7a216c3f\"" Nov 5 15:06:48.450185 containerd[1997]: time="2025-11-05T15:06:48.450091450Z" level=info msg="RemoveContainer for \"bbb21aa4d661408ff1e01af555343338ee286e88314f3922a9e7272f7a216c3f\" returns successfully" Nov 5 15:06:48.487961 kubelet[3315]: E1105 15:06:48.487786 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-dbscs" podUID="80c765f3-c6de-4dd4-a2b4-f4fc2fe8a572" Nov 5 15:06:52.601517 kubelet[3315]: E1105 15:06:52.601432 3315 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.78:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-78?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Nov 5 15:06:53.486909 kubelet[3315]: E1105 15:06:53.486707 3315 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-84dbb9fd44-dctgw" podUID="8b23d5a1-7fb9-4412-bcea-afb711fedf9c" Nov 5 15:06:56.487761 kubelet[3315]: E1105 15:06:56.487689 3315 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-79d458847d-vcdwj" podUID="7c3e0183-e5b9-4364-be32-8caba037f1e7"