Jan 17 00:01:21.243417 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 17 00:01:21.243463 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 16 22:28:08 -00 2026
Jan 17 00:01:21.243488 kernel: KASLR disabled due to lack of seed
Jan 17 00:01:21.243504 kernel: efi: EFI v2.7 by EDK II
Jan 17 00:01:21.243521 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Jan 17 00:01:21.243536 kernel: ACPI: Early table checksum verification disabled
Jan 17 00:01:21.243555 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 17 00:01:21.243570 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 17 00:01:21.243586 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 17 00:01:21.243602 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 17 00:01:21.243623 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 17 00:01:21.243638 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 17 00:01:21.243654 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 17 00:01:21.243670 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 17 00:01:21.243689 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 17 00:01:21.243709 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 17 00:01:21.243750 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 17 00:01:21.243773 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 17 00:01:21.243790 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 17 00:01:21.243807 kernel: printk: bootconsole [uart0] enabled
Jan 17 00:01:21.243824 kernel: NUMA: Failed to initialise from firmware
Jan 17 00:01:21.243841 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 17 00:01:21.243858 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 17 00:01:21.243874 kernel: Zone ranges:
Jan 17 00:01:21.243890 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 17 00:01:21.243907 kernel: DMA32 empty
Jan 17 00:01:21.243930 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 17 00:01:21.243947 kernel: Movable zone start for each node
Jan 17 00:01:21.243963 kernel: Early memory node ranges
Jan 17 00:01:21.243980 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 17 00:01:21.243996 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 17 00:01:21.244012 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 17 00:01:21.244029 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 17 00:01:21.244068 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 17 00:01:21.244085 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 17 00:01:21.244101 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 17 00:01:21.244118 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 17 00:01:21.244134 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 17 00:01:21.244157 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 17 00:01:21.244174 kernel: psci: probing for conduit method from ACPI.
Jan 17 00:01:21.244197 kernel: psci: PSCIv1.0 detected in firmware.
Jan 17 00:01:21.244216 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 17 00:01:21.244233 kernel: psci: Trusted OS migration not required
Jan 17 00:01:21.244255 kernel: psci: SMC Calling Convention v1.1
Jan 17 00:01:21.244273 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jan 17 00:01:21.244291 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 17 00:01:21.244309 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 17 00:01:21.244326 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 17 00:01:21.244344 kernel: Detected PIPT I-cache on CPU0
Jan 17 00:01:21.244361 kernel: CPU features: detected: GIC system register CPU interface
Jan 17 00:01:21.244379 kernel: CPU features: detected: Spectre-v2
Jan 17 00:01:21.244396 kernel: CPU features: detected: Spectre-v3a
Jan 17 00:01:21.244413 kernel: CPU features: detected: Spectre-BHB
Jan 17 00:01:21.244431 kernel: CPU features: detected: ARM erratum 1742098
Jan 17 00:01:21.244452 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 17 00:01:21.244470 kernel: alternatives: applying boot alternatives
Jan 17 00:01:21.244490 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 17 00:01:21.244508 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 00:01:21.244526 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 17 00:01:21.244543 kernel: Fallback order for Node 0: 0
Jan 17 00:01:21.244561 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 17 00:01:21.244578 kernel: Policy zone: Normal
Jan 17 00:01:21.244595 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 00:01:21.244612 kernel: software IO TLB: area num 2.
Jan 17 00:01:21.244630 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 17 00:01:21.244653 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Jan 17 00:01:21.244671 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 00:01:21.244689 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 00:01:21.244707 kernel: rcu: RCU event tracing is enabled.
Jan 17 00:01:21.248773 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 00:01:21.248798 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 00:01:21.248816 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 00:01:21.248835 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 00:01:21.248853 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 00:01:21.248870 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 17 00:01:21.248888 kernel: GICv3: 96 SPIs implemented
Jan 17 00:01:21.248914 kernel: GICv3: 0 Extended SPIs implemented
Jan 17 00:01:21.248932 kernel: Root IRQ handler: gic_handle_irq
Jan 17 00:01:21.248950 kernel: GICv3: GICv3 features: 16 PPIs
Jan 17 00:01:21.248968 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 17 00:01:21.248985 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 17 00:01:21.249003 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 17 00:01:21.249022 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 17 00:01:21.249040 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 17 00:01:21.249057 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 17 00:01:21.249076 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 17 00:01:21.249093 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 00:01:21.249111 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 17 00:01:21.249133 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 17 00:01:21.249151 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 17 00:01:21.249169 kernel: Console: colour dummy device 80x25
Jan 17 00:01:21.249187 kernel: printk: console [tty1] enabled
Jan 17 00:01:21.249206 kernel: ACPI: Core revision 20230628
Jan 17 00:01:21.249224 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 17 00:01:21.249243 kernel: pid_max: default: 32768 minimum: 301
Jan 17 00:01:21.249263 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 00:01:21.249282 kernel: landlock: Up and running.
Jan 17 00:01:21.249306 kernel: SELinux: Initializing.
Jan 17 00:01:21.249325 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:01:21.249343 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 17 00:01:21.249362 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:01:21.249381 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 00:01:21.249401 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 00:01:21.249420 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 00:01:21.249439 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 17 00:01:21.249458 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 17 00:01:21.249483 kernel: Remapping and enabling EFI services.
Jan 17 00:01:21.249502 kernel: smp: Bringing up secondary CPUs ...
Jan 17 00:01:21.249522 kernel: Detected PIPT I-cache on CPU1
Jan 17 00:01:21.249540 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 17 00:01:21.249561 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 17 00:01:21.249579 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 17 00:01:21.249598 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 00:01:21.249617 kernel: SMP: Total of 2 processors activated.
Jan 17 00:01:21.249635 kernel: CPU features: detected: 32-bit EL0 Support
Jan 17 00:01:21.249658 kernel: CPU features: detected: 32-bit EL1 Support
Jan 17 00:01:21.249677 kernel: CPU features: detected: CRC32 instructions
Jan 17 00:01:21.249695 kernel: CPU: All CPU(s) started at EL1
Jan 17 00:01:21.251521 kernel: alternatives: applying system-wide alternatives
Jan 17 00:01:21.251566 kernel: devtmpfs: initialized
Jan 17 00:01:21.251585 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 00:01:21.251605 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 00:01:21.251623 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 00:01:21.251642 kernel: SMBIOS 3.0.0 present.
Jan 17 00:01:21.251666 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 17 00:01:21.251685 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 00:01:21.251704 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 17 00:01:21.251741 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 17 00:01:21.251764 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 17 00:01:21.251783 kernel: audit: initializing netlink subsys (disabled)
Jan 17 00:01:21.251802 kernel: audit: type=2000 audit(0.288:1): state=initialized audit_enabled=0 res=1
Jan 17 00:01:21.251821 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 00:01:21.251846 kernel: cpuidle: using governor menu
Jan 17 00:01:21.251864 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 17 00:01:21.251883 kernel: ASID allocator initialised with 65536 entries
Jan 17 00:01:21.251901 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 00:01:21.251920 kernel: Serial: AMBA PL011 UART driver
Jan 17 00:01:21.251938 kernel: Modules: 17488 pages in range for non-PLT usage
Jan 17 00:01:21.251957 kernel: Modules: 509008 pages in range for PLT usage
Jan 17 00:01:21.251975 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 00:01:21.251994 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 00:01:21.252017 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 17 00:01:21.252055 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 17 00:01:21.252077 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 00:01:21.252096 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 00:01:21.252115 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 17 00:01:21.252133 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 17 00:01:21.252151 kernel: ACPI: Added _OSI(Module Device)
Jan 17 00:01:21.252170 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 00:01:21.252189 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 00:01:21.252213 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 00:01:21.252232 kernel: ACPI: Interpreter enabled
Jan 17 00:01:21.252251 kernel: ACPI: Using GIC for interrupt routing
Jan 17 00:01:21.252269 kernel: ACPI: MCFG table detected, 1 entries
Jan 17 00:01:21.252287 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Jan 17 00:01:21.252602 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 17 00:01:21.253239 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 17 00:01:21.253462 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 17 00:01:21.253675 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Jan 17 00:01:21.253921 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Jan 17 00:01:21.253949 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 17 00:01:21.253969 kernel: acpiphp: Slot [1] registered
Jan 17 00:01:21.253987 kernel: acpiphp: Slot [2] registered
Jan 17 00:01:21.254006 kernel: acpiphp: Slot [3] registered
Jan 17 00:01:21.254025 kernel: acpiphp: Slot [4] registered
Jan 17 00:01:21.254043 kernel: acpiphp: Slot [5] registered
Jan 17 00:01:21.254068 kernel: acpiphp: Slot [6] registered
Jan 17 00:01:21.254087 kernel: acpiphp: Slot [7] registered
Jan 17 00:01:21.254106 kernel: acpiphp: Slot [8] registered
Jan 17 00:01:21.254124 kernel: acpiphp: Slot [9] registered
Jan 17 00:01:21.254142 kernel: acpiphp: Slot [10] registered
Jan 17 00:01:21.254160 kernel: acpiphp: Slot [11] registered
Jan 17 00:01:21.254179 kernel: acpiphp: Slot [12] registered
Jan 17 00:01:21.254198 kernel: acpiphp: Slot [13] registered
Jan 17 00:01:21.254216 kernel: acpiphp: Slot [14] registered
Jan 17 00:01:21.254234 kernel: acpiphp: Slot [15] registered
Jan 17 00:01:21.254258 kernel: acpiphp: Slot [16] registered
Jan 17 00:01:21.254276 kernel: acpiphp: Slot [17] registered
Jan 17 00:01:21.254294 kernel: acpiphp: Slot [18] registered
Jan 17 00:01:21.254313 kernel: acpiphp: Slot [19] registered
Jan 17 00:01:21.254331 kernel: acpiphp: Slot [20] registered
Jan 17 00:01:21.254349 kernel: acpiphp: Slot [21] registered
Jan 17 00:01:21.254368 kernel: acpiphp: Slot [22] registered
Jan 17 00:01:21.254386 kernel: acpiphp: Slot [23] registered
Jan 17 00:01:21.254404 kernel: acpiphp: Slot [24] registered
Jan 17 00:01:21.254427 kernel: acpiphp: Slot [25] registered
Jan 17 00:01:21.254446 kernel: acpiphp: Slot [26] registered
Jan 17 00:01:21.254464 kernel: acpiphp: Slot [27] registered
Jan 17 00:01:21.254483 kernel: acpiphp: Slot [28] registered
Jan 17 00:01:21.254501 kernel: acpiphp: Slot [29] registered
Jan 17 00:01:21.254520 kernel: acpiphp: Slot [30] registered
Jan 17 00:01:21.254538 kernel: acpiphp: Slot [31] registered
Jan 17 00:01:21.254556 kernel: PCI host bridge to bus 0000:00
Jan 17 00:01:21.255662 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 17 00:01:21.255931 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 17 00:01:21.256145 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 17 00:01:21.256329 kernel: pci_bus 0000:00: root bus resource [bus 00]
Jan 17 00:01:21.256594 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 17 00:01:21.256902 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 17 00:01:21.257122 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 17 00:01:21.257366 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 17 00:01:21.257576 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 17 00:01:21.257866 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 17 00:01:21.258087 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 17 00:01:21.258291 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 17 00:01:21.258499 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 17 00:01:21.258703 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 17 00:01:21.259151 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 17 00:01:21.259350 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 17 00:01:21.259591 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 17 00:01:21.259855 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 17 00:01:21.259883 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 17 00:01:21.259902 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 17 00:01:21.259922 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 17 00:01:21.259941 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 17 00:01:21.259969 kernel: iommu: Default domain type: Translated
Jan 17 00:01:21.259988 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 17 00:01:21.260007 kernel: efivars: Registered efivars operations
Jan 17 00:01:21.260025 kernel: vgaarb: loaded
Jan 17 00:01:21.260063 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 17 00:01:21.260084 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 00:01:21.260103 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 00:01:21.260121 kernel: pnp: PnP ACPI init
Jan 17 00:01:21.260348 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 17 00:01:21.260382 kernel: pnp: PnP ACPI: found 1 devices
Jan 17 00:01:21.260401 kernel: NET: Registered PF_INET protocol family
Jan 17 00:01:21.260420 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 00:01:21.260439 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 17 00:01:21.260458 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 00:01:21.260477 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 17 00:01:21.260496 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 17 00:01:21.260515 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 17 00:01:21.260539 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:01:21.260558 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 17 00:01:21.260577 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 00:01:21.260595 kernel: PCI: CLS 0 bytes, default 64
Jan 17 00:01:21.260613 kernel: kvm [1]: HYP mode not available
Jan 17 00:01:21.260632 kernel: Initialise system trusted keyrings
Jan 17 00:01:21.260650 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 17 00:01:21.260669 kernel: Key type asymmetric registered
Jan 17 00:01:21.260687 kernel: Asymmetric key parser 'x509' registered
Jan 17 00:01:21.260711 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 17 00:01:21.260750 kernel: io scheduler mq-deadline registered
Jan 17 00:01:21.260771 kernel: io scheduler kyber registered
Jan 17 00:01:21.260789 kernel: io scheduler bfq registered
Jan 17 00:01:21.261000 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 17 00:01:21.261027 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 17 00:01:21.261047 kernel: ACPI: button: Power Button [PWRB]
Jan 17 00:01:21.261066 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 17 00:01:21.261084 kernel: ACPI: button: Sleep Button [SLPB]
Jan 17 00:01:21.261109 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 00:01:21.261129 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 17 00:01:21.261334 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 17 00:01:21.261360 kernel: printk: console [ttyS0] disabled
Jan 17 00:01:21.261380 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 17 00:01:21.261399 kernel: printk: console [ttyS0] enabled
Jan 17 00:01:21.261417 kernel: printk: bootconsole [uart0] disabled
Jan 17 00:01:21.261436 kernel: thunder_xcv, ver 1.0
Jan 17 00:01:21.261454 kernel: thunder_bgx, ver 1.0
Jan 17 00:01:21.261479 kernel: nicpf, ver 1.0
Jan 17 00:01:21.261497 kernel: nicvf, ver 1.0
Jan 17 00:01:21.261704 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 17 00:01:21.261921 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-17T00:01:20 UTC (1768608080)
Jan 17 00:01:21.261948 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 17 00:01:21.261968 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 17 00:01:21.261987 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 17 00:01:21.262005 kernel: watchdog: Hard watchdog permanently disabled
Jan 17 00:01:21.262030 kernel: NET: Registered PF_INET6 protocol family
Jan 17 00:01:21.262049 kernel: Segment Routing with IPv6
Jan 17 00:01:21.262068 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 00:01:21.262086 kernel: NET: Registered PF_PACKET protocol family
Jan 17 00:01:21.262105 kernel: Key type dns_resolver registered
Jan 17 00:01:21.262124 kernel: registered taskstats version 1
Jan 17 00:01:21.262142 kernel: Loading compiled-in X.509 certificates
Jan 17 00:01:21.262162 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 0aabad27df82424bfffc9b1a502a9ae84b35bad4'
Jan 17 00:01:21.262180 kernel: Key type .fscrypt registered
Jan 17 00:01:21.262205 kernel: Key type fscrypt-provisioning registered
Jan 17 00:01:21.262224 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 00:01:21.262242 kernel: ima: Allocated hash algorithm: sha1
Jan 17 00:01:21.262261 kernel: ima: No architecture policies found
Jan 17 00:01:21.262280 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 17 00:01:21.262299 kernel: clk: Disabling unused clocks
Jan 17 00:01:21.262318 kernel: Freeing unused kernel memory: 39424K
Jan 17 00:01:21.262337 kernel: Run /init as init process
Jan 17 00:01:21.262356 kernel: with arguments:
Jan 17 00:01:21.262379 kernel: /init
Jan 17 00:01:21.262398 kernel: with environment:
Jan 17 00:01:21.262416 kernel: HOME=/
Jan 17 00:01:21.262435 kernel: TERM=linux
Jan 17 00:01:21.262458 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:01:21.262482 systemd[1]: Detected virtualization amazon.
Jan 17 00:01:21.262502 systemd[1]: Detected architecture arm64.
Jan 17 00:01:21.262522 systemd[1]: Running in initrd.
Jan 17 00:01:21.262547 systemd[1]: No hostname configured, using default hostname.
Jan 17 00:01:21.262567 systemd[1]: Hostname set to <localhost>.
Jan 17 00:01:21.262588 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:01:21.262609 systemd[1]: Queued start job for default target initrd.target.
Jan 17 00:01:21.262629 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:01:21.262649 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:01:21.262671 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 00:01:21.262692 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:01:21.263797 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 00:01:21.263838 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 00:01:21.263863 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 00:01:21.263884 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 00:01:21.263905 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:01:21.263926 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:01:21.263953 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:01:21.263974 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:01:21.263994 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:01:21.264014 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:01:21.264050 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:01:21.264076 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:01:21.264097 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 00:01:21.264118 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 00:01:21.264138 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:01:21.264165 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:01:21.264186 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:01:21.264206 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:01:21.264226 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 00:01:21.264247 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:01:21.264267 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 00:01:21.264287 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 00:01:21.264307 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:01:21.264328 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:01:21.264353 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:01:21.264374 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 00:01:21.264394 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:01:21.264414 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 00:01:21.264479 systemd-journald[251]: Collecting audit messages is disabled.
Jan 17 00:01:21.264530 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:01:21.264551 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 00:01:21.264610 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:01:21.264647 systemd-journald[251]: Journal started
Jan 17 00:01:21.264687 systemd-journald[251]: Runtime Journal (/run/log/journal/ec28c53dbba5be2a8cc6173a4d3c423a) is 8.0M, max 75.3M, 67.3M free.
Jan 17 00:01:21.227042 systemd-modules-load[252]: Inserted module 'overlay'
Jan 17 00:01:21.274077 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:01:21.276405 kernel: Bridge firewalling registered
Jan 17 00:01:21.273776 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jan 17 00:01:21.286033 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:01:21.297153 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:01:21.298073 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:01:21.298504 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:01:21.315998 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:01:21.322328 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:01:21.362872 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:01:21.369151 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:01:21.378913 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:01:21.391675 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:01:21.400274 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:01:21.417993 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 00:01:21.450126 dracut-cmdline[292]: dracut-dracut-053
Jan 17 00:01:21.457960 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 17 00:01:21.495548 systemd-resolved[286]: Positive Trust Anchors:
Jan 17 00:01:21.495584 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:01:21.495646 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:01:21.624760 kernel: SCSI subsystem initialized
Jan 17 00:01:21.634759 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 00:01:21.645757 kernel: iscsi: registered transport (tcp)
Jan 17 00:01:21.668176 kernel: iscsi: registered transport (qla4xxx)
Jan 17 00:01:21.668263 kernel: QLogic iSCSI HBA Driver
Jan 17 00:01:21.747795 kernel: random: crng init done
Jan 17 00:01:21.748429 systemd-resolved[286]: Defaulting to hostname 'linux'.
Jan 17 00:01:21.752874 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:01:21.758069 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:01:21.782615 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:01:21.794096 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 00:01:21.829773 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 00:01:21.829852 kernel: device-mapper: uevent: version 1.0.3
Jan 17 00:01:21.829879 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 00:01:21.896771 kernel: raid6: neonx8 gen() 6670 MB/s
Jan 17 00:01:21.913757 kernel: raid6: neonx4 gen() 6459 MB/s
Jan 17 00:01:21.930756 kernel: raid6: neonx2 gen() 5392 MB/s
Jan 17 00:01:21.947754 kernel: raid6: neonx1 gen() 3938 MB/s
Jan 17 00:01:21.964755 kernel: raid6: int64x8 gen() 3798 MB/s
Jan 17 00:01:21.981767 kernel: raid6: int64x4 gen() 3688 MB/s
Jan 17 00:01:21.998758 kernel: raid6: int64x2 gen() 3561 MB/s
Jan 17 00:01:22.016837 kernel: raid6: int64x1 gen() 2765 MB/s
Jan 17 00:01:22.016891 kernel: raid6: using algorithm neonx8 gen() 6670 MB/s
Jan 17 00:01:22.035835 kernel: raid6: .... xor() 4907 MB/s, rmw enabled
Jan 17 00:01:22.035904 kernel: raid6: using neon recovery algorithm
Jan 17 00:01:22.043759 kernel: xor: measuring software checksum speed
Jan 17 00:01:22.046131 kernel: 8regs : 10272 MB/sec
Jan 17 00:01:22.046164 kernel: 32regs : 11915 MB/sec
Jan 17 00:01:22.047431 kernel: arm64_neon : 9485 MB/sec
Jan 17 00:01:22.047464 kernel: xor: using function: 32regs (11915 MB/sec)
Jan 17 00:01:22.131768 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 00:01:22.151394 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:01:22.163046 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:01:22.198952 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Jan 17 00:01:22.207050 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:01:22.226247 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 00:01:22.252607 dracut-pre-trigger[484]: rd.md=0: removing MD RAID activation
Jan 17 00:01:22.307544 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:01:22.321120 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:01:22.446040 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:01:22.460315 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 00:01:22.509254 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:01:22.513739 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:01:22.518883 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:01:22.532224 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:01:22.544606 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 00:01:22.580317 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:01:22.649778 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 17 00:01:22.649853 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 17 00:01:22.657218 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 17 00:01:22.660340 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 17 00:01:22.656699 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:01:22.656956 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:01:22.660411 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:01:22.663041 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:01:22.697607 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:52:a2:08:15:4b
Jan 17 00:01:22.664200 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:01:22.672260 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:01:22.695113 (udev-worker)[530]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 00:01:22.699280 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:01:22.737774 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:01:22.751362 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 00:01:22.757763 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 17 00:01:22.757832 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 17 00:01:22.772699 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 17 00:01:22.781496 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 17 00:01:22.781567 kernel: GPT:9289727 != 33554431
Jan 17 00:01:22.781593 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 17 00:01:22.784737 kernel: GPT:9289727 != 33554431
Jan 17 00:01:22.784812 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:01:22.784838 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:01:22.786553 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:01:22.895441 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (528)
Jan 17 00:01:22.924754 kernel: BTRFS: device fsid 257557f7-4bf9-4b29-86df-93ad67770d31 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (536)
Jan 17 00:01:22.925675 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 17 00:01:22.987910 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 17 00:01:23.034475 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 17 00:01:23.051203 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 17 00:01:23.054087 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 17 00:01:23.070058 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 00:01:23.086078 disk-uuid[663]: Primary Header is updated.
Jan 17 00:01:23.086078 disk-uuid[663]: Secondary Entries is updated.
Jan 17 00:01:23.086078 disk-uuid[663]: Secondary Header is updated.
Jan 17 00:01:23.098760 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:01:23.109281 kernel: GPT:disk_guids don't match.
Jan 17 00:01:23.109342 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 17 00:01:23.110359 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:01:23.123763 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:01:24.124790 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 17 00:01:24.126929 disk-uuid[664]: The operation has completed successfully.
Jan 17 00:01:24.315199 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 00:01:24.317635 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 00:01:24.364061 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 00:01:24.386650 sh[1012]: Success
Jan 17 00:01:24.412758 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 17 00:01:24.538792 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 00:01:24.547209 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 00:01:24.562056 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 00:01:24.595948 kernel: BTRFS info (device dm-0): first mount of filesystem 257557f7-4bf9-4b29-86df-93ad67770d31
Jan 17 00:01:24.596010 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:01:24.598010 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 00:01:24.599517 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 00:01:24.600863 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 00:01:24.679753 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 17 00:01:24.701751 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 00:01:24.706215 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 00:01:24.714007 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 00:01:24.717834 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 00:01:24.768628 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:01:24.768705 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:01:24.770461 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 00:01:24.786823 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 00:01:24.807467 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 00:01:24.810666 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:01:24.821877 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 00:01:24.834557 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 00:01:24.914799 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:01:24.927076 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:01:24.992871 systemd-networkd[1204]: lo: Link UP
Jan 17 00:01:24.992884 systemd-networkd[1204]: lo: Gained carrier
Jan 17 00:01:24.999522 systemd-networkd[1204]: Enumeration completed
Jan 17 00:01:24.999826 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:01:25.006684 systemd[1]: Reached target network.target - Network.
Jan 17 00:01:25.011019 systemd-networkd[1204]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:01:25.011039 systemd-networkd[1204]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:01:25.021187 systemd-networkd[1204]: eth0: Link UP
Jan 17 00:01:25.021371 systemd-networkd[1204]: eth0: Gained carrier
Jan 17 00:01:25.021391 systemd-networkd[1204]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:01:25.042820 systemd-networkd[1204]: eth0: DHCPv4 address 172.31.30.130/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 17 00:01:25.282607 ignition[1143]: Ignition 2.19.0
Jan 17 00:01:25.282627 ignition[1143]: Stage: fetch-offline
Jan 17 00:01:25.287710 ignition[1143]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:01:25.289800 ignition[1143]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:01:25.290638 ignition[1143]: Ignition finished successfully
Jan 17 00:01:25.293617 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:01:25.313075 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 00:01:25.340628 ignition[1214]: Ignition 2.19.0
Jan 17 00:01:25.340656 ignition[1214]: Stage: fetch
Jan 17 00:01:25.341313 ignition[1214]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:01:25.341338 ignition[1214]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:01:25.341493 ignition[1214]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:01:25.363957 ignition[1214]: PUT result: OK
Jan 17 00:01:25.369222 ignition[1214]: parsed url from cmdline: ""
Jan 17 00:01:25.369238 ignition[1214]: no config URL provided
Jan 17 00:01:25.369253 ignition[1214]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 00:01:25.369279 ignition[1214]: no config at "/usr/lib/ignition/user.ign"
Jan 17 00:01:25.369310 ignition[1214]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:01:25.374200 ignition[1214]: PUT result: OK
Jan 17 00:01:25.374289 ignition[1214]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 17 00:01:25.379919 ignition[1214]: GET result: OK
Jan 17 00:01:25.383926 ignition[1214]: parsing config with SHA512: b19c3d944ec78e2b991adf6402e568510aff9f2ea403420609394940fbfa8853db0a138a5901e902a8c2755534572a980243de1b37ae8b6072ad2cf9db7817f0
Jan 17 00:01:25.392913 unknown[1214]: fetched base config from "system"
Jan 17 00:01:25.393176 unknown[1214]: fetched base config from "system"
Jan 17 00:01:25.393989 ignition[1214]: fetch: fetch complete
Jan 17 00:01:25.393191 unknown[1214]: fetched user config from "aws"
Jan 17 00:01:25.394000 ignition[1214]: fetch: fetch passed
Jan 17 00:01:25.394085 ignition[1214]: Ignition finished successfully
Jan 17 00:01:25.408338 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 00:01:25.422049 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 00:01:25.448806 ignition[1220]: Ignition 2.19.0
Jan 17 00:01:25.448836 ignition[1220]: Stage: kargs
Jan 17 00:01:25.450787 ignition[1220]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:01:25.450814 ignition[1220]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:01:25.452122 ignition[1220]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:01:25.455003 ignition[1220]: PUT result: OK
Jan 17 00:01:25.464873 ignition[1220]: kargs: kargs passed
Jan 17 00:01:25.464970 ignition[1220]: Ignition finished successfully
Jan 17 00:01:25.474693 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 00:01:25.483188 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 00:01:25.510034 ignition[1226]: Ignition 2.19.0
Jan 17 00:01:25.510540 ignition[1226]: Stage: disks
Jan 17 00:01:25.511262 ignition[1226]: no configs at "/usr/lib/ignition/base.d"
Jan 17 00:01:25.511287 ignition[1226]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:01:25.511476 ignition[1226]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:01:25.520973 ignition[1226]: PUT result: OK
Jan 17 00:01:25.526288 ignition[1226]: disks: disks passed
Jan 17 00:01:25.526463 ignition[1226]: Ignition finished successfully
Jan 17 00:01:25.532155 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 00:01:25.535533 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 00:01:25.538230 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 00:01:25.541180 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:01:25.543705 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:01:25.548159 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:01:25.564518 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 00:01:25.618415 systemd-fsck[1234]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 17 00:01:25.626336 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 00:01:25.637901 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 00:01:25.736762 kernel: EXT4-fs (nvme0n1p9): mounted filesystem b70ce012-b356-4603-a688-ee0b3b7de551 r/w with ordered data mode. Quota mode: none.
Jan 17 00:01:25.737763 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 00:01:25.742230 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:01:25.762917 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:01:25.773939 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 00:01:25.777818 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 17 00:01:25.777893 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 00:01:25.777942 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:01:25.802275 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 00:01:25.811773 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1253)
Jan 17 00:01:25.816569 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:01:25.816644 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:01:25.816672 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 00:01:25.819054 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 00:01:25.836828 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 00:01:25.839821 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:01:26.148459 initrd-setup-root[1277]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 00:01:26.173241 initrd-setup-root[1284]: cut: /sysroot/etc/group: No such file or directory
Jan 17 00:01:26.182075 initrd-setup-root[1291]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 00:01:26.191749 initrd-setup-root[1298]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 00:01:26.570763 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 00:01:26.580943 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 00:01:26.584983 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 00:01:26.619840 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 17 00:01:26.624777 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:01:26.660821 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 17 00:01:26.670288 ignition[1366]: INFO : Ignition 2.19.0
Jan 17 00:01:26.670288 ignition[1366]: INFO : Stage: mount
Jan 17 00:01:26.674340 ignition[1366]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:01:26.674340 ignition[1366]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:01:26.674340 ignition[1366]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:01:26.674340 ignition[1366]: INFO : PUT result: OK
Jan 17 00:01:26.684755 ignition[1366]: INFO : mount: mount passed
Jan 17 00:01:26.684755 ignition[1366]: INFO : Ignition finished successfully
Jan 17 00:01:26.690698 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 17 00:01:26.699949 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 17 00:01:26.721036 systemd-networkd[1204]: eth0: Gained IPv6LL
Jan 17 00:01:26.748892 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 00:01:26.773780 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1378)
Jan 17 00:01:26.777845 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 17 00:01:26.777918 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 17 00:01:26.777946 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 17 00:01:26.786770 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 17 00:01:26.788881 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 00:01:26.830272 ignition[1395]: INFO : Ignition 2.19.0
Jan 17 00:01:26.832435 ignition[1395]: INFO : Stage: files
Jan 17 00:01:26.832435 ignition[1395]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:01:26.832435 ignition[1395]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:01:26.832435 ignition[1395]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:01:26.842298 ignition[1395]: INFO : PUT result: OK
Jan 17 00:01:26.848057 ignition[1395]: DEBUG : files: compiled without relabeling support, skipping
Jan 17 00:01:26.851283 ignition[1395]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 17 00:01:26.851283 ignition[1395]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 17 00:01:26.875172 ignition[1395]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 17 00:01:26.878876 ignition[1395]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 17 00:01:26.882526 unknown[1395]: wrote ssh authorized keys file for user: core
Jan 17 00:01:26.885433 ignition[1395]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 17 00:01:26.889532 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 17 00:01:26.889532 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 17 00:01:26.983165 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 17 00:01:27.154519 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 17 00:01:27.161837 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 17 00:01:27.161837 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 17 00:01:27.161837 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:01:27.161837 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 17 00:01:27.161837 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:01:27.161837 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 17 00:01:27.161837 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:01:27.161837 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 17 00:01:27.161837 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:01:27.161837 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 17 00:01:27.161837 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 17 00:01:27.161837 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 17 00:01:27.161837 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 17 00:01:27.161837 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jan 17 00:01:27.510782 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 17 00:01:27.921337 ignition[1395]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 17 00:01:27.921337 ignition[1395]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 17 00:01:27.929591 ignition[1395]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:01:27.929591 ignition[1395]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 17 00:01:27.929591 ignition[1395]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 17 00:01:27.929591 ignition[1395]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 17 00:01:27.929591 ignition[1395]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 17 00:01:27.929591 ignition[1395]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:01:27.929591 ignition[1395]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 17 00:01:27.929591 ignition[1395]: INFO : files: files passed
Jan 17 00:01:27.929591 ignition[1395]: INFO : Ignition finished successfully
Jan 17 00:01:27.960825 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 17 00:01:27.971156 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 17 00:01:27.978985 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 17 00:01:27.995531 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 17 00:01:27.995849 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 17 00:01:28.031921 initrd-setup-root-after-ignition[1424]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:01:28.031921 initrd-setup-root-after-ignition[1424]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:01:28.044242 initrd-setup-root-after-ignition[1428]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 17 00:01:28.041542 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:01:28.055007 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 17 00:01:28.067078 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 17 00:01:28.117180 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 17 00:01:28.117842 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 17 00:01:28.123162 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 17 00:01:28.127509 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 17 00:01:28.129964 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 17 00:01:28.144977 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 17 00:01:28.174684 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:01:28.191998 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 17 00:01:28.217899 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:01:28.220905 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:01:28.226644 systemd[1]: Stopped target timers.target - Timer Units.
Jan 17 00:01:28.233191 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 17 00:01:28.233618 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 17 00:01:28.241396 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 17 00:01:28.244058 systemd[1]: Stopped target basic.target - Basic System.
Jan 17 00:01:28.250236 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 17 00:01:28.252990 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 00:01:28.255977 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 17 00:01:28.265906 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 17 00:01:28.268953 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 00:01:28.274061 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 17 00:01:28.276836 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 17 00:01:28.287300 systemd[1]: Stopped target swap.target - Swaps.
Jan 17 00:01:28.290859 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 17 00:01:28.291109 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 00:01:28.296984 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:01:28.300108 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:01:28.312193 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 17 00:01:28.314570 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:01:28.318553 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 17 00:01:28.318899 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 17 00:01:28.328065 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 17 00:01:28.328542 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 17 00:01:28.338796 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 17 00:01:28.341039 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 17 00:01:28.353092 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 17 00:01:28.358299 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 17 00:01:28.367494 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 17 00:01:28.372126 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:01:28.375253 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 17 00:01:28.375567 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 00:01:28.402173 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 17 00:01:28.404811 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 17 00:01:28.426236 ignition[1448]: INFO : Ignition 2.19.0
Jan 17 00:01:28.426236 ignition[1448]: INFO : Stage: umount
Jan 17 00:01:28.432699 ignition[1448]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 17 00:01:28.432699 ignition[1448]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 17 00:01:28.432699 ignition[1448]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 17 00:01:28.450424 ignition[1448]: INFO : PUT result: OK
Jan 17 00:01:28.442809 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 17 00:01:28.457567 ignition[1448]: INFO : umount: umount passed
Jan 17 00:01:28.461935 ignition[1448]: INFO : Ignition finished successfully
Jan 17 00:01:28.464562 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 17 00:01:28.467356 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 17 00:01:28.473789 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 17 00:01:28.474147 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 17 00:01:28.484149 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 17 00:01:28.484312 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 17 00:01:28.490900 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 17 00:01:28.491002 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 17 00:01:28.493299 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 17 00:01:28.493386 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 17 00:01:28.495633 systemd[1]: Stopped target network.target - Network.
Jan 17 00:01:28.498074 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 17 00:01:28.498163 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 00:01:28.500794 systemd[1]: Stopped target paths.target - Path Units.
Jan 17 00:01:28.502764 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 17 00:01:28.507771 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:01:28.511119 systemd[1]: Stopped target slices.target - Slice Units.
Jan 17 00:01:28.513419 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 17 00:01:28.515669 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 17 00:01:28.515790 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 00:01:28.517980 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 17 00:01:28.518052 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 00:01:28.520580 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 17 00:01:28.520666 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 17 00:01:28.527581 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 17 00:01:28.527899 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 17 00:01:28.538484 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 17 00:01:28.540820 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 17 00:01:28.554259 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 17 00:01:28.556984 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 17 00:01:28.560994 systemd-networkd[1204]: eth0: DHCPv6 lease lost
Jan 17 00:01:28.580715 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 17 00:01:28.585188 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 17 00:01:28.593891 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 17 00:01:28.594714 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 17 00:01:28.604385 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 17 00:01:28.604474 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:01:28.623862 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 17 00:01:28.628687 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 17 00:01:28.629290 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 00:01:28.637419 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 17 00:01:28.637522 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:01:28.640047 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 17 00:01:28.640153 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:01:28.641484 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 17 00:01:28.642695 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:01:28.667432 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:01:28.693452 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 17 00:01:28.693891 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 17 00:01:28.698513 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 17 00:01:28.699277 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:01:28.707240 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 17 00:01:28.707348 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:01:28.712690 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 17 00:01:28.712798 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:01:28.717477 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 17 00:01:28.717860 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 00:01:28.722759 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 17 00:01:28.722846 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 17 00:01:28.738423 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 00:01:28.738516 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 00:01:28.753966 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 17 00:01:28.756558 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 17 00:01:28.756677 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:01:28.759927 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 17 00:01:28.760033 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:01:28.763161 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 17 00:01:28.763239 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:01:28.766655 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 00:01:28.766750 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:01:28.814374 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 17 00:01:28.814783 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 17 00:01:28.823365 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 17 00:01:28.836859 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 17 00:01:28.853323 systemd[1]: Switching root.
Jan 17 00:01:28.898593 systemd-journald[251]: Journal stopped
Jan 17 00:01:31.303158 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Jan 17 00:01:31.303294 kernel: SELinux: policy capability network_peer_controls=1
Jan 17 00:01:31.303339 kernel: SELinux: policy capability open_perms=1
Jan 17 00:01:31.303371 kernel: SELinux: policy capability extended_socket_class=1
Jan 17 00:01:31.303412 kernel: SELinux: policy capability always_check_network=0
Jan 17 00:01:31.303442 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 17 00:01:31.303479 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 17 00:01:31.303509 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 17 00:01:31.303538 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 17 00:01:31.303568 kernel: audit: type=1403 audit(1768608089.289:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 17 00:01:31.303601 systemd[1]: Successfully loaded SELinux policy in 79.493ms.
Jan 17 00:01:31.303646 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.228ms.
Jan 17 00:01:31.303682 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 00:01:31.303716 systemd[1]: Detected virtualization amazon.
Jan 17 00:01:31.304286 systemd[1]: Detected architecture arm64.
Jan 17 00:01:31.304330 systemd[1]: Detected first boot.
Jan 17 00:01:31.304366 systemd[1]: Initializing machine ID from VM UUID.
Jan 17 00:01:31.304403 zram_generator::config[1490]: No configuration found.
Jan 17 00:01:31.304440 systemd[1]: Populated /etc with preset unit settings.
Jan 17 00:01:31.304474 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 17 00:01:31.304505 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 17 00:01:31.304541 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 17 00:01:31.304576 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 17 00:01:31.304617 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 17 00:01:31.304651 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 17 00:01:31.304684 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 17 00:01:31.304715 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 17 00:01:31.304826 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 17 00:01:31.304861 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 17 00:01:31.304893 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 17 00:01:31.304929 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 00:01:31.304965 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 00:01:31.305018 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 17 00:01:31.305055 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 17 00:01:31.305087 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 17 00:01:31.305119 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 00:01:31.305152 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 17 00:01:31.305188 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 00:01:31.305223 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 17 00:01:31.305258 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 17 00:01:31.305297 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 17 00:01:31.305909 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 17 00:01:31.306067 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 00:01:31.306108 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 00:01:31.306141 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 00:01:31.306174 systemd[1]: Reached target swap.target - Swaps.
Jan 17 00:01:31.306205 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 17 00:01:31.306236 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 17 00:01:31.306272 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 00:01:31.306303 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 00:01:31.306332 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 00:01:31.306364 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 17 00:01:31.306394 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 17 00:01:31.306424 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 17 00:01:31.307477 systemd[1]: Mounting media.mount - External Media Directory...
Jan 17 00:01:31.307549 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 17 00:01:31.307581 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 17 00:01:31.307624 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 17 00:01:31.307658 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 17 00:01:31.307688 systemd[1]: Reached target machines.target - Containers.
Jan 17 00:01:31.309272 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 17 00:01:31.309330 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:01:31.309362 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 00:01:31.309395 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 17 00:01:31.309436 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:01:31.309474 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:01:31.309514 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:01:31.309546 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 17 00:01:31.309588 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:01:31.309619 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 17 00:01:31.309651 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 17 00:01:31.309683 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 17 00:01:31.309714 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 17 00:01:31.310819 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 17 00:01:31.310862 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 00:01:31.310893 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 00:01:31.310924 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 17 00:01:31.310954 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 17 00:01:31.310986 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 00:01:31.311018 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 17 00:01:31.311051 systemd[1]: Stopped verity-setup.service.
Jan 17 00:01:31.311083 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 17 00:01:31.311115 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 17 00:01:31.311150 systemd[1]: Mounted media.mount - External Media Directory.
Jan 17 00:01:31.311205 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 17 00:01:31.311239 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 17 00:01:31.311272 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 17 00:01:31.311304 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 00:01:31.311341 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 17 00:01:31.313558 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 17 00:01:31.313590 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:01:31.313620 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:01:31.313649 kernel: fuse: init (API version 7.39)
Jan 17 00:01:31.313683 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:01:31.313713 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:01:31.313765 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 00:01:31.313798 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 17 00:01:31.313834 kernel: loop: module loaded
Jan 17 00:01:31.313864 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 17 00:01:31.313897 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:01:31.313931 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:01:31.313960 kernel: ACPI: bus type drm_connector registered
Jan 17 00:01:31.313992 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 17 00:01:31.314022 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 17 00:01:31.314051 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:01:31.314081 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 00:01:31.314113 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 00:01:31.314142 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:01:31.314171 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:01:31.314202 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 17 00:01:31.314232 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 17 00:01:31.314272 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 17 00:01:31.314302 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 17 00:01:31.314333 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 17 00:01:31.314364 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 17 00:01:31.314396 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 00:01:31.314426 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 17 00:01:31.314502 systemd-journald[1574]: Collecting audit messages is disabled.
Jan 17 00:01:31.314567 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 17 00:01:31.314598 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 17 00:01:31.314628 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:01:31.314657 systemd-journald[1574]: Journal started
Jan 17 00:01:31.314711 systemd-journald[1574]: Runtime Journal (/run/log/journal/ec28c53dbba5be2a8cc6173a4d3c423a) is 8.0M, max 75.3M, 67.3M free.
Jan 17 00:01:30.532846 systemd[1]: Queued start job for default target multi-user.target.
Jan 17 00:01:30.585931 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 17 00:01:30.586703 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 17 00:01:31.334911 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 17 00:01:31.334992 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:01:31.349250 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 17 00:01:31.365437 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 17 00:01:31.371493 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 00:01:31.376822 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 17 00:01:31.380479 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 17 00:01:31.419870 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 00:01:31.445845 systemd-tmpfiles[1594]: ACLs are not supported, ignoring.
Jan 17 00:01:31.446631 systemd-tmpfiles[1594]: ACLs are not supported, ignoring.
Jan 17 00:01:31.453797 kernel: loop0: detected capacity change from 0 to 114328
Jan 17 00:01:31.462176 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 17 00:01:31.465795 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 00:01:31.470584 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 17 00:01:31.477817 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 17 00:01:31.486025 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 17 00:01:31.495206 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 17 00:01:31.524816 systemd-journald[1574]: Time spent on flushing to /var/log/journal/ec28c53dbba5be2a8cc6173a4d3c423a is 124.119ms for 912 entries.
Jan 17 00:01:31.524816 systemd-journald[1574]: System Journal (/var/log/journal/ec28c53dbba5be2a8cc6173a4d3c423a) is 8.0M, max 195.6M, 187.6M free.
Jan 17 00:01:31.667178 systemd-journald[1574]: Received client request to flush runtime journal.
Jan 17 00:01:31.667254 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 17 00:01:31.667291 kernel: loop1: detected capacity change from 0 to 207008
Jan 17 00:01:31.600680 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 17 00:01:31.605427 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 17 00:01:31.640855 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 17 00:01:31.652472 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 00:01:31.672508 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 17 00:01:31.715170 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 00:01:31.730196 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 17 00:01:31.739770 kernel: loop2: detected capacity change from 0 to 114432
Jan 17 00:01:31.744489 systemd-tmpfiles[1637]: ACLs are not supported, ignoring.
Jan 17 00:01:31.746804 systemd-tmpfiles[1637]: ACLs are not supported, ignoring.
Jan 17 00:01:31.763804 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 00:01:31.779524 udevadm[1644]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 17 00:01:31.853773 kernel: loop3: detected capacity change from 0 to 52536
Jan 17 00:01:31.923777 kernel: loop4: detected capacity change from 0 to 114328
Jan 17 00:01:31.943778 kernel: loop5: detected capacity change from 0 to 207008
Jan 17 00:01:31.984857 kernel: loop6: detected capacity change from 0 to 114432
Jan 17 00:01:32.002764 kernel: loop7: detected capacity change from 0 to 52536
Jan 17 00:01:32.022712 (sd-merge)[1648]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 17 00:01:32.023672 (sd-merge)[1648]: Merged extensions into '/usr'.
Jan 17 00:01:32.036145 systemd[1]: Reloading requested from client PID 1604 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 17 00:01:32.036365 systemd[1]: Reloading...
Jan 17 00:01:32.222760 zram_generator::config[1674]: No configuration found.
Jan 17 00:01:32.504844 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 17 00:01:32.622983 systemd[1]: Reloading finished in 585 ms.
Jan 17 00:01:32.675842 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 17 00:01:32.690028 systemd[1]: Starting ensure-sysext.service...
Jan 17 00:01:32.702230 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 00:01:32.720275 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 17 00:01:32.739335 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 00:01:32.742432 systemd[1]: Reloading requested from client PID 1725 ('systemctl') (unit ensure-sysext.service)...
Jan 17 00:01:32.742452 systemd[1]: Reloading...
Jan 17 00:01:32.773906 systemd-tmpfiles[1726]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 17 00:01:32.774563 systemd-tmpfiles[1726]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 17 00:01:32.780321 systemd-tmpfiles[1726]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 17 00:01:32.780977 systemd-tmpfiles[1726]: ACLs are not supported, ignoring.
Jan 17 00:01:32.781111 systemd-tmpfiles[1726]: ACLs are not supported, ignoring.
Jan 17 00:01:32.792830 systemd-tmpfiles[1726]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:01:32.792850 systemd-tmpfiles[1726]: Skipping /boot
Jan 17 00:01:32.832651 systemd-tmpfiles[1726]: Detected autofs mount point /boot during canonicalization of boot.
Jan 17 00:01:32.832864 systemd-tmpfiles[1726]: Skipping /boot
Jan 17 00:01:32.882158 systemd-udevd[1728]: Using default interface naming scheme 'v255'.
Jan 17 00:01:32.974752 zram_generator::config[1755]: No configuration found.
Jan 17 00:01:33.014906 ldconfig[1600]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 17 00:01:33.133331 (udev-worker)[1792]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 00:01:33.387563 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
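The `(sd-merge)` entries above show systemd-sysext locating the extension images Ignition staged earlier (note the `/sysroot/etc/extensions/kubernetes.raw` symlink from the files stage) and merging them into `/usr`, after which systemd reloads units. A hedged Go sketch of just the discovery step, scanning the standard sysext search directories for `*.raw` images; real systemd-sysext additionally validates each image's extension-release metadata before mounting, which is omitted here:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	// systemd-sysext looks for *.raw images (or plain directories) in
	// these search paths; /etc/extensions is where Ignition placed the
	// kubernetes.raw symlink seen earlier in this log.
	dirs := []string{"/etc/extensions", "/run/extensions", "/var/lib/extensions"}
	for _, dir := range dirs {
		entries, err := os.ReadDir(dir)
		if err != nil {
			continue // a missing search directory is not an error
		}
		for _, e := range entries {
			if !strings.HasSuffix(e.Name(), ".raw") && !e.IsDir() {
				continue
			}
			// Resolve symlinks like kubernetes.raw -> /opt/extensions/...
			target, _ := filepath.EvalSymlinks(filepath.Join(dir, e.Name()))
			fmt.Printf("candidate extension: %s -> %s\n", e.Name(), target)
		}
	}
}
```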
Jan 17 00:01:33.446808 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1763)
Jan 17 00:01:33.533943 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 17 00:01:33.534168 systemd[1]: Reloading finished in 791 ms.
Jan 17 00:01:33.561469 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 00:01:33.565780 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 17 00:01:33.571818 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 00:01:33.676047 systemd[1]: Finished ensure-sysext.service.
Jan 17 00:01:33.711821 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 17 00:01:33.721178 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 17 00:01:33.735051 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 17 00:01:33.743036 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 17 00:01:33.746156 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 17 00:01:33.759125 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 17 00:01:33.768678 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 17 00:01:33.776166 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 17 00:01:33.784412 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 17 00:01:33.790830 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 17 00:01:33.793697 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 17 00:01:33.798130 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 17 00:01:33.804223 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 17 00:01:33.819103 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 00:01:33.827041 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 00:01:33.829591 systemd[1]: Reached target time-set.target - System Time Set.
Jan 17 00:01:33.839352 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 17 00:01:33.849079 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 00:01:33.868754 lvm[1927]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 00:01:33.918936 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 17 00:01:33.921966 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 17 00:01:33.923826 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 17 00:01:33.940783 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 17 00:01:33.945320 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 00:01:33.970785 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 17 00:01:33.974374 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 17 00:01:33.976857 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 17 00:01:33.987163 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 17 00:01:33.987461 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 17 00:01:33.991075 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 17 00:01:33.991601 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 17 00:01:34.009577 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 17 00:01:34.009924 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 17 00:01:34.024055 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 17 00:01:34.048755 lvm[1954]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 17 00:01:34.045894 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 17 00:01:34.049322 augenrules[1961]: No rules
Jan 17 00:01:34.058889 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 17 00:01:34.069108 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 17 00:01:34.074878 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 17 00:01:34.119593 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 17 00:01:34.132077 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 17 00:01:34.135836 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 17 00:01:34.141459 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 17 00:01:34.159773 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 17 00:01:34.215831 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 00:01:34.286675 systemd-networkd[1941]: lo: Link UP
Jan 17 00:01:34.286698 systemd-networkd[1941]: lo: Gained carrier
Jan 17 00:01:34.289908 systemd-networkd[1941]: Enumeration completed
Jan 17 00:01:34.290127 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 00:01:34.295203 systemd-resolved[1942]: Positive Trust Anchors:
Jan 17 00:01:34.295657 systemd-resolved[1942]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 00:01:34.295785 systemd-resolved[1942]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 00:01:34.296096 systemd-networkd[1941]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:01:34.296103 systemd-networkd[1941]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 00:01:34.301282 systemd-networkd[1941]: eth0: Link UP
Jan 17 00:01:34.301680 systemd-networkd[1941]: eth0: Gained carrier
Jan 17 00:01:34.301716 systemd-networkd[1941]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 00:01:34.302062 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 17 00:01:34.317866 systemd-networkd[1941]: eth0: DHCPv4 address 172.31.30.130/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 17 00:01:34.321616 systemd-resolved[1942]: Defaulting to hostname 'linux'.
Jan 17 00:01:34.325220 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 00:01:34.328098 systemd[1]: Reached target network.target - Network.
Jan 17 00:01:34.330130 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 00:01:34.332995 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 00:01:34.335530 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 17 00:01:34.338401 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 17 00:01:34.341540 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 17 00:01:34.344126 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 17 00:01:34.346945 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 17 00:01:34.350003 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 17 00:01:34.350058 systemd[1]: Reached target paths.target - Path Units.
Jan 17 00:01:34.352134 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 00:01:34.355130 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 17 00:01:34.360344 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 17 00:01:34.370175 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 17 00:01:34.373598 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 17 00:01:34.376206 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 00:01:34.378426 systemd[1]: Reached target basic.target - Basic System.
Jan 17 00:01:34.381004 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 17 00:01:34.381059 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 17 00:01:34.387926 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 17 00:01:34.397356 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 17 00:01:34.404093 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 17 00:01:34.412255 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 17 00:01:34.417868 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 17 00:01:34.420304 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 17 00:01:34.432081 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 17 00:01:34.441093 systemd[1]: Started ntpd.service - Network Time Service.
Jan 17 00:01:34.449559 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 17 00:01:34.456993 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 17 00:01:34.464222 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 17 00:01:34.473077 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 17 00:01:34.498047 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 17 00:01:34.502519 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 17 00:01:34.503426 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 17 00:01:34.506114 systemd[1]: Starting update-engine.service - Update Engine...
Jan 17 00:01:34.510863 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 17 00:01:34.528841 jq[1991]: false
Jan 17 00:01:34.530483 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 17 00:01:34.532265 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 17 00:01:34.542589 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 17 00:01:34.543197 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 17 00:01:34.646262 dbus-daemon[1990]: [system] SELinux support is enabled
Jan 17 00:01:34.646605 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 17 00:01:34.655679 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 17 00:01:34.656852 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 17 00:01:34.659879 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 17 00:01:34.659917 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 17 00:01:34.678506 systemd[1]: motdgen.service: Deactivated successfully.
Jan 17 00:01:34.678095 dbus-daemon[1990]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1941 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 17 00:01:34.694343 jq[2003]: true
Jan 17 00:01:34.680202 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 17 00:01:34.708630 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 17 00:01:34.720314 extend-filesystems[1992]: Found loop4
Jan 17 00:01:34.746737 ntpd[1994]: 17 Jan 00:01:34 ntpd[1994]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:20 UTC 2026 (1): Starting
Jan 17 00:01:34.746737 ntpd[1994]: 17 Jan 00:01:34 ntpd[1994]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 17 00:01:34.746737 ntpd[1994]: 17 Jan 00:01:34 ntpd[1994]: ----------------------------------------------------
Jan 17 00:01:34.746737 ntpd[1994]: 17 Jan 00:01:34 ntpd[1994]: ntp-4 is maintained by Network Time Foundation,
Jan 17 00:01:34.746737 ntpd[1994]: 17 Jan 00:01:34 ntpd[1994]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 17 00:01:34.746737 ntpd[1994]: 17 Jan 00:01:34 ntpd[1994]: corporation. Support and training for ntp-4 are
Jan 17 00:01:34.746737 ntpd[1994]: 17 Jan 00:01:34 ntpd[1994]: available at https://www.nwtime.org/support
Jan 17 00:01:34.746737 ntpd[1994]: 17 Jan 00:01:34 ntpd[1994]: ----------------------------------------------------
Jan 17 00:01:34.747669 tar[2016]: linux-arm64/LICENSE
Jan 17 00:01:34.747669 tar[2016]: linux-arm64/helm
Jan 17 00:01:34.735952 (ntainerd)[2019]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 17 00:01:34.748523 extend-filesystems[1992]: Found loop5
Jan 17 00:01:34.748523 extend-filesystems[1992]: Found loop6
Jan 17 00:01:34.748523 extend-filesystems[1992]: Found loop7
Jan 17 00:01:34.748523 extend-filesystems[1992]: Found nvme0n1
Jan 17 00:01:34.748523 extend-filesystems[1992]: Found nvme0n1p1
Jan 17 00:01:34.748523 extend-filesystems[1992]: Found nvme0n1p2
Jan 17 00:01:34.748523 extend-filesystems[1992]: Found nvme0n1p3
Jan 17 00:01:34.748523 extend-filesystems[1992]: Found usr
Jan 17 00:01:34.748523 extend-filesystems[1992]: Found nvme0n1p4
Jan 17 00:01:34.748523 extend-filesystems[1992]: Found nvme0n1p6
Jan 17 00:01:34.748523 extend-filesystems[1992]: Found nvme0n1p7
Jan 17 00:01:34.748523 extend-filesystems[1992]: Found nvme0n1p9
Jan 17 00:01:34.748523 extend-filesystems[1992]: Checking size of /dev/nvme0n1p9
Jan 17 00:01:34.732348 ntpd[1994]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:20 UTC 2026 (1): Starting
Jan 17 00:01:34.819624 extend-filesystems[1992]: Resized partition /dev/nvme0n1p9
Jan 17 00:01:34.835471 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Jan 17 00:01:34.835520 ntpd[1994]: 17 Jan 00:01:34 ntpd[1994]: proto: precision = 0.096 usec (-23)
Jan 17 00:01:34.835520 ntpd[1994]: 17 Jan 00:01:34 ntpd[1994]: basedate set to 2026-01-04
Jan 17 00:01:34.835520 ntpd[1994]: 17 Jan 00:01:34 ntpd[1994]: gps base set to 2026-01-04 (week 2400)
Jan 17 00:01:34.835520 ntpd[1994]: 17 Jan 00:01:34 ntpd[1994]: Listen and drop on 0 v6wildcard [::]:123
Jan 17 00:01:34.835520 ntpd[1994]: 17 Jan 00:01:34 ntpd[1994]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 17 00:01:34.835520 ntpd[1994]: 17 Jan 00:01:34 ntpd[1994]: Listen normally on 2 lo 127.0.0.1:123
Jan 17 00:01:34.835520 ntpd[1994]: 17 Jan 00:01:34 ntpd[1994]: Listen normally on 3 eth0 172.31.30.130:123
Jan 17 00:01:34.835520 ntpd[1994]: 17 Jan 00:01:34 ntpd[1994]: Listen normally on 4 lo [::1]:123
Jan 17 00:01:34.835520 ntpd[1994]: 17 Jan 00:01:34 ntpd[1994]: bind(21) AF_INET6 fe80::452:a2ff:fe08:154b%2#123 flags 0x11 failed: Cannot assign requested address
Jan 17 00:01:34.835520 ntpd[1994]: 17 Jan 00:01:34 ntpd[1994]: unable to create socket on eth0 (5) for fe80::452:a2ff:fe08:154b%2#123
Jan 17 00:01:34.835520 ntpd[1994]: 17 Jan 00:01:34 ntpd[1994]: failed to init interface for address fe80::452:a2ff:fe08:154b%2
Jan 17 00:01:34.835520 ntpd[1994]: 17 Jan 00:01:34 ntpd[1994]: Listening on routing socket on fd #21 for interface updates
Jan 17 00:01:34.732397 ntpd[1994]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 17 00:01:34.836311 jq[2031]: true
Jan 17 00:01:34.842192 extend-filesystems[2038]: resize2fs 1.47.1 (20-May-2024)
Jan 17 00:01:34.732418 ntpd[1994]: ----------------------------------------------------
Jan 17 00:01:34.852439 update_engine[2001]: I20260117 00:01:34.836710 2001 main.cc:92] Flatcar Update Engine starting
Jan 17 00:01:34.732437 ntpd[1994]: ntp-4 is maintained by Network Time Foundation,
Jan 17 00:01:34.732456 ntpd[1994]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 17 00:01:34.732475 ntpd[1994]: corporation. Support and training for ntp-4 are
Jan 17 00:01:34.732493 ntpd[1994]: available at https://www.nwtime.org/support
Jan 17 00:01:34.732511 ntpd[1994]: ----------------------------------------------------
Jan 17 00:01:34.752489 ntpd[1994]: proto: precision = 0.096 usec (-23)
Jan 17 00:01:34.761352 ntpd[1994]: basedate set to 2026-01-04
Jan 17 00:01:34.761388 ntpd[1994]: gps base set to 2026-01-04 (week 2400)
Jan 17 00:01:34.860359 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 17 00:01:34.790345 ntpd[1994]: Listen and drop on 0 v6wildcard [::]:123
Jan 17 00:01:34.790430 ntpd[1994]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 17 00:01:34.790713 ntpd[1994]: Listen normally on 2 lo 127.0.0.1:123
Jan 17 00:01:34.815569 ntpd[1994]: Listen normally on 3 eth0 172.31.30.130:123
Jan 17 00:01:34.815642 ntpd[1994]: Listen normally on 4 lo [::1]:123
Jan 17 00:01:34.815742 ntpd[1994]: bind(21) AF_INET6 fe80::452:a2ff:fe08:154b%2#123 flags 0x11 failed: Cannot assign requested address
Jan 17 00:01:34.815786 ntpd[1994]: unable to create socket on eth0 (5) for fe80::452:a2ff:fe08:154b%2#123
Jan 17 00:01:34.815816 ntpd[1994]: failed to init interface for address fe80::452:a2ff:fe08:154b%2
Jan 17 00:01:34.815878 ntpd[1994]: Listening on routing socket on fd #21 for interface updates
Jan 17 00:01:34.868570 systemd[1]: Started update-engine.service - Update Engine.
Jan 17 00:01:34.872267 ntpd[1994]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 17 00:01:34.873881 ntpd[1994]: 17 Jan 00:01:34 ntpd[1994]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 17 00:01:34.873881 ntpd[1994]: 17 Jan 00:01:34 ntpd[1994]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 17 00:01:34.872340 ntpd[1994]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 17 00:01:34.877321 update_engine[2001]: I20260117 00:01:34.877233 2001 update_check_scheduler.cc:74] Next update check in 6m24s
Jan 17 00:01:34.880140 systemd[1]: Started locksmithd.service - Cluster reboot manager.
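The `bind(21) ... Cannot assign requested address` entries are a startup race, not a configuration error: ntpd tries to bind port 123 on eth0's IPv6 link-local address before that address is usable (the log only reports `eth0: Gained IPv6LL` about a second later), so the kernel returns EADDRNOTAVAIL. A small Go sketch that reproduces the same failure mode by binding a UDP socket to a scoped link-local address; the address and zone are taken from this log, and binding port 123 requires CAP_NET_BIND_SERVICE:

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Binding to a link-local address that is absent or still tentative
	// (duplicate-address detection not finished) fails with
	// "cannot assign requested address", exactly what ntpd logs above.
	addr := &net.UDPAddr{
		IP:   net.ParseIP("fe80::452:a2ff:fe08:154b"),
		Port: 123, // NTP; needs CAP_NET_BIND_SERVICE or root
		Zone: "eth0",
	}
	conn, err := net.ListenUDP("udp6", addr)
	if err != nil {
		fmt.Println("bind failed:", err) // EADDRNOTAVAIL while tentative
		return
	}
	defer conn.Close()
	fmt.Println("bound to", conn.LocalAddr())
}
```

ntpd recovers on its own: it listens on a routing socket for interface updates and retries the bind once the address becomes valid.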
Jan 17 00:01:35.007692 coreos-metadata[1989]: Jan 17 00:01:35.007 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 17 00:01:35.013590 coreos-metadata[1989]: Jan 17 00:01:35.013 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jan 17 00:01:35.020784 coreos-metadata[1989]: Jan 17 00:01:35.019 INFO Fetch successful
Jan 17 00:01:35.020784 coreos-metadata[1989]: Jan 17 00:01:35.019 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jan 17 00:01:35.020784 coreos-metadata[1989]: Jan 17 00:01:35.020 INFO Fetch successful
Jan 17 00:01:35.020784 coreos-metadata[1989]: Jan 17 00:01:35.020 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jan 17 00:01:35.023549 coreos-metadata[1989]: Jan 17 00:01:35.023 INFO Fetch successful
Jan 17 00:01:35.024908 coreos-metadata[1989]: Jan 17 00:01:35.024 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jan 17 00:01:35.028676 systemd-logind[2000]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 17 00:01:35.036881 coreos-metadata[1989]: Jan 17 00:01:35.034 INFO Fetch successful
Jan 17 00:01:35.036881 coreos-metadata[1989]: Jan 17 00:01:35.035 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jan 17 00:01:35.037097 coreos-metadata[1989]: Jan 17 00:01:35.036 INFO Fetch failed with 404: resource not found
Jan 17 00:01:35.037097 coreos-metadata[1989]: Jan 17 00:01:35.036 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jan 17 00:01:35.038485 systemd-logind[2000]: Watching system buttons on /dev/input/event1 (Sleep Button)
Jan 17 00:01:35.040893 coreos-metadata[1989]: Jan 17 00:01:35.039 INFO Fetch successful
Jan 17 00:01:35.040893 coreos-metadata[1989]: Jan 17 00:01:35.039 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jan 17 00:01:35.038946 systemd-logind[2000]: New seat seat0.
Jan 17 00:01:35.042324 coreos-metadata[1989]: Jan 17 00:01:35.042 INFO Fetch successful
Jan 17 00:01:35.042324 coreos-metadata[1989]: Jan 17 00:01:35.042 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jan 17 00:01:35.044888 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 17 00:01:35.045296 coreos-metadata[1989]: Jan 17 00:01:35.045 INFO Fetch successful
Jan 17 00:01:35.045687 coreos-metadata[1989]: Jan 17 00:01:35.045 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jan 17 00:01:35.052165 coreos-metadata[1989]: Jan 17 00:01:35.052 INFO Fetch successful
Jan 17 00:01:35.052283 coreos-metadata[1989]: Jan 17 00:01:35.052 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jan 17 00:01:35.053948 coreos-metadata[1989]: Jan 17 00:01:35.053 INFO Fetch successful
Jan 17 00:01:35.079761 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Jan 17 00:01:35.099760 extend-filesystems[2038]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jan 17 00:01:35.099760 extend-filesystems[2038]: old_desc_blocks = 1, new_desc_blocks = 2
Jan 17 00:01:35.099760 extend-filesystems[2038]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Jan 17 00:01:35.114509 extend-filesystems[1992]: Resized filesystem in /dev/nvme0n1p9
Jan 17 00:01:35.148770 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1763)
Jan 17 00:01:35.167169 bash[2068]: Updated "/home/core/.ssh/authorized_keys"
Jan 17 00:01:35.176300 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 17 00:01:35.177944 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 17 00:01:35.184196 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 17 00:01:35.199779 systemd[1]: Starting sshkeys.service...
Jan 17 00:01:35.254962 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 17 00:01:35.260202 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 17 00:01:35.309874 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Jan 17 00:01:35.365029 dbus-daemon[1990]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jan 17 00:01:35.367503 dbus-daemon[1990]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2030 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jan 17 00:01:35.375245 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Jan 17 00:01:35.379797 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Jan 17 00:01:35.394176 systemd[1]: Starting polkit.service - Authorization Manager...
Jan 17 00:01:35.516986 polkitd[2119]: Started polkitd version 121
Jan 17 00:01:35.537640 polkitd[2119]: Loading rules from directory /etc/polkit-1/rules.d
Jan 17 00:01:35.537796 polkitd[2119]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 17 00:01:35.540693 polkitd[2119]: Finished loading, compiling and executing 2 rules
Jan 17 00:01:35.543968 dbus-daemon[1990]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jan 17 00:01:35.544342 systemd[1]: Started polkit.service - Authorization Manager.
Jan 17 00:01:35.548331 polkitd[2119]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 17 00:01:35.566117 locksmithd[2045]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 17 00:01:35.643010 systemd-resolved[1942]: System hostname changed to 'ip-172-31-30-130'.
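The extend-filesystems entries above record an online grow of the root ext4 filesystem from 553472 to 3587067 blocks of 4 KiB, i.e. from roughly 2.1 GiB to about 13.7 GiB, performed while `/` stays mounted. A hedged Go sketch of the same report-then-resize flow; the device path comes from this log, resize2fs is assumed to be on PATH, and golang.org/x/sys/unix provides the statfs wrapper:

```go
package main

import (
	"fmt"
	"os/exec"

	"golang.org/x/sys/unix"
)

func main() {
	const dev = "/dev/nvme0n1p9" // root filesystem device in this log

	// Report the current size: block count times block size.
	var st unix.Statfs_t
	if err := unix.Statfs("/", &st); err != nil {
		panic(err)
	}
	fmt.Printf("before: %d blocks of %d bytes (%.1f GiB)\n",
		st.Blocks, st.Bsize, float64(st.Blocks)*float64(st.Bsize)/(1<<30))

	// resize2fs with no explicit size grows the filesystem to fill the
	// (already enlarged) partition; ext4 supports this while mounted.
	out, err := exec.Command("resize2fs", dev).CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}
```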
Jan 17 00:01:35.643020 systemd-hostnamed[2030]: Hostname set to (transient) Jan 17 00:01:35.734207 ntpd[1994]: bind(24) AF_INET6 fe80::452:a2ff:fe08:154b%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 00:01:35.734927 ntpd[1994]: 17 Jan 00:01:35 ntpd[1994]: bind(24) AF_INET6 fe80::452:a2ff:fe08:154b%2#123 flags 0x11 failed: Cannot assign requested address Jan 17 00:01:35.734927 ntpd[1994]: 17 Jan 00:01:35 ntpd[1994]: unable to create socket on eth0 (6) for fe80::452:a2ff:fe08:154b%2#123 Jan 17 00:01:35.734927 ntpd[1994]: 17 Jan 00:01:35 ntpd[1994]: failed to init interface for address fe80::452:a2ff:fe08:154b%2 Jan 17 00:01:35.734270 ntpd[1994]: unable to create socket on eth0 (6) for fe80::452:a2ff:fe08:154b%2#123 Jan 17 00:01:35.734300 ntpd[1994]: failed to init interface for address fe80::452:a2ff:fe08:154b%2 Jan 17 00:01:35.756213 coreos-metadata[2107]: Jan 17 00:01:35.754 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 17 00:01:35.777157 coreos-metadata[2107]: Jan 17 00:01:35.775 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 17 00:01:35.777157 coreos-metadata[2107]: Jan 17 00:01:35.776 INFO Fetch successful Jan 17 00:01:35.777157 coreos-metadata[2107]: Jan 17 00:01:35.776 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 17 00:01:35.778853 coreos-metadata[2107]: Jan 17 00:01:35.777 INFO Fetch successful Jan 17 00:01:35.788030 unknown[2107]: wrote ssh authorized keys file for user: core Jan 17 00:01:35.879607 update-ssh-keys[2176]: Updated "/home/core/.ssh/authorized_keys" Jan 17 00:01:35.884102 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 17 00:01:35.894712 systemd[1]: Finished sshkeys.service. Jan 17 00:01:35.912094 containerd[2019]: time="2026-01-17T00:01:35.910809444Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 00:01:36.000927 systemd-networkd[1941]: eth0: Gained IPv6LL Jan 17 00:01:36.013842 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 00:01:36.017907 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 00:01:36.028213 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 17 00:01:36.042219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:01:36.056390 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 00:01:36.080259 containerd[2019]: time="2026-01-17T00:01:36.080136297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:01:36.101385 containerd[2019]: time="2026-01-17T00:01:36.099101541Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:01:36.101385 containerd[2019]: time="2026-01-17T00:01:36.099175929Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 00:01:36.101385 containerd[2019]: time="2026-01-17T00:01:36.099212373Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 17 00:01:36.101385 containerd[2019]: time="2026-01-17T00:01:36.100808277Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 00:01:36.101385 containerd[2019]: time="2026-01-17T00:01:36.100877349Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 00:01:36.101385 containerd[2019]: time="2026-01-17T00:01:36.101070753Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:01:36.101385 containerd[2019]: time="2026-01-17T00:01:36.101104497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:01:36.103081 containerd[2019]: time="2026-01-17T00:01:36.103008573Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:01:36.103081 containerd[2019]: time="2026-01-17T00:01:36.103074057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 00:01:36.103232 containerd[2019]: time="2026-01-17T00:01:36.103113093Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:01:36.103232 containerd[2019]: time="2026-01-17T00:01:36.103139913Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 00:01:36.106756 containerd[2019]: time="2026-01-17T00:01:36.103369521Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:01:36.111075 containerd[2019]: time="2026-01-17T00:01:36.110931477Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 00:01:36.111478 containerd[2019]: time="2026-01-17T00:01:36.111227241Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 00:01:36.111478 containerd[2019]: time="2026-01-17T00:01:36.111276621Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 00:01:36.111590 containerd[2019]: time="2026-01-17T00:01:36.111492741Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 17 00:01:36.111639 containerd[2019]: time="2026-01-17T00:01:36.111597549Z" level=info msg="metadata content store policy set" policy=shared Jan 17 00:01:36.120612 containerd[2019]: time="2026-01-17T00:01:36.120531261Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 00:01:36.121964 containerd[2019]: time="2026-01-17T00:01:36.121812453Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 00:01:36.122066 containerd[2019]: time="2026-01-17T00:01:36.122000817Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 17 00:01:36.122066 containerd[2019]: time="2026-01-17T00:01:36.122042409Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 00:01:36.122153 containerd[2019]: time="2026-01-17T00:01:36.122075493Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 00:01:36.122373 containerd[2019]: time="2026-01-17T00:01:36.122327241Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 00:01:36.123338 containerd[2019]: time="2026-01-17T00:01:36.122813949Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 00:01:36.123338 containerd[2019]: time="2026-01-17T00:01:36.123081033Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 00:01:36.123338 containerd[2019]: time="2026-01-17T00:01:36.123118365Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 00:01:36.123338 containerd[2019]: time="2026-01-17T00:01:36.123148785Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 00:01:36.123338 containerd[2019]: time="2026-01-17T00:01:36.123181893Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 00:01:36.123338 containerd[2019]: time="2026-01-17T00:01:36.123212757Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 00:01:36.123338 containerd[2019]: time="2026-01-17T00:01:36.123243609Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 00:01:36.123338 containerd[2019]: time="2026-01-17T00:01:36.123275757Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 00:01:36.130351 containerd[2019]: time="2026-01-17T00:01:36.123307533Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 00:01:36.131454 containerd[2019]: time="2026-01-17T00:01:36.131322321Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 00:01:36.131454 containerd[2019]: time="2026-01-17T00:01:36.131414505Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 00:01:36.131604 containerd[2019]: time="2026-01-17T00:01:36.131510661Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 17 00:01:36.131604 containerd[2019]: time="2026-01-17T00:01:36.131574105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 00:01:36.131739 containerd[2019]: time="2026-01-17T00:01:36.131616885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 00:01:36.131739 containerd[2019]: time="2026-01-17T00:01:36.131662209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 00:01:36.132800 containerd[2019]: time="2026-01-17T00:01:36.131707761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 17 00:01:36.132980 containerd[2019]: time="2026-01-17T00:01:36.132826305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 00:01:36.132980 containerd[2019]: time="2026-01-17T00:01:36.132880797Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 00:01:36.132980 containerd[2019]: time="2026-01-17T00:01:36.132922653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 00:01:36.132980 containerd[2019]: time="2026-01-17T00:01:36.132958869Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 00:01:36.133148 containerd[2019]: time="2026-01-17T00:01:36.133001181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 00:01:36.133148 containerd[2019]: time="2026-01-17T00:01:36.133053753Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 00:01:36.133148 containerd[2019]: time="2026-01-17T00:01:36.133095081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 00:01:36.133285 containerd[2019]: time="2026-01-17T00:01:36.133134969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 00:01:36.133285 containerd[2019]: time="2026-01-17T00:01:36.133184901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 00:01:36.133285 containerd[2019]: time="2026-01-17T00:01:36.133233405Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 00:01:36.134080 containerd[2019]: time="2026-01-17T00:01:36.133298889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 00:01:36.134080 containerd[2019]: time="2026-01-17T00:01:36.133338825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 00:01:36.134080 containerd[2019]: time="2026-01-17T00:01:36.133376061Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 00:01:36.134080 containerd[2019]: time="2026-01-17T00:01:36.133619865Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 00:01:36.134080 containerd[2019]: time="2026-01-17T00:01:36.133670373Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 00:01:36.138745 containerd[2019]: time="2026-01-17T00:01:36.133707213Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 00:01:36.139644 containerd[2019]: time="2026-01-17T00:01:36.139488993Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 00:01:36.142633 containerd[2019]: time="2026-01-17T00:01:36.139547997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 00:01:36.142945 containerd[2019]: time="2026-01-17T00:01:36.142881381Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 17 00:01:36.143077 containerd[2019]: time="2026-01-17T00:01:36.143051673Z" level=info msg="NRI interface is disabled by configuration." Jan 17 00:01:36.144766 containerd[2019]: time="2026-01-17T00:01:36.143172165Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 00:01:36.144895 containerd[2019]: time="2026-01-17T00:01:36.143848293Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 00:01:36.144895 containerd[2019]: time="2026-01-17T00:01:36.143963517Z" level=info msg="Connect containerd service" Jan 17 00:01:36.144895 containerd[2019]: time="2026-01-17T00:01:36.144062469Z" level=info msg="using legacy CRI server" Jan 17 00:01:36.144895 containerd[2019]: time="2026-01-17T00:01:36.144082701Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 00:01:36.144895 containerd[2019]: time="2026-01-17T00:01:36.144241281Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 17 00:01:36.152189 containerd[2019]: time="2026-01-17T00:01:36.150173121Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:01:36.161747 containerd[2019]: time="2026-01-17T00:01:36.152594313Z" level=info msg="Start subscribing containerd event" Jan 17 00:01:36.161747 containerd[2019]: time="2026-01-17T00:01:36.152674293Z" level=info msg="Start recovering state" Jan 17 00:01:36.161747 containerd[2019]: time="2026-01-17T00:01:36.152852133Z" level=info msg="Start event monitor" Jan 17 00:01:36.161747 containerd[2019]: time="2026-01-17T00:01:36.152878581Z" level=info msg="Start snapshots syncer" Jan 17 00:01:36.161747 containerd[2019]: time="2026-01-17T00:01:36.152912973Z" level=info msg="Start cni network conf syncer for default" Jan 17 00:01:36.161747 containerd[2019]: time="2026-01-17T00:01:36.152933337Z" level=info msg="Start streaming server" Jan 17 00:01:36.161747 containerd[2019]: time="2026-01-17T00:01:36.153746853Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 00:01:36.161747 containerd[2019]: time="2026-01-17T00:01:36.153853653Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 00:01:36.161747 containerd[2019]: time="2026-01-17T00:01:36.155503521Z" level=info msg="containerd successfully booted in 0.249285s" Jan 17 00:01:36.154089 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 00:01:36.178247 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 00:01:36.253243 amazon-ssm-agent[2191]: Initializing new seelog logger Jan 17 00:01:36.257875 amazon-ssm-agent[2191]: New Seelog Logger Creation Complete Jan 17 00:01:36.257875 amazon-ssm-agent[2191]: 2026/01/17 00:01:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:01:36.257875 amazon-ssm-agent[2191]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:01:36.257875 amazon-ssm-agent[2191]: 2026/01/17 00:01:36 processing appconfig overrides Jan 17 00:01:36.257875 amazon-ssm-agent[2191]: 2026/01/17 00:01:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:01:36.257875 amazon-ssm-agent[2191]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:01:36.257875 amazon-ssm-agent[2191]: 2026/01/17 00:01:36 processing appconfig overrides Jan 17 00:01:36.257875 amazon-ssm-agent[2191]: 2026/01/17 00:01:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:01:36.257875 amazon-ssm-agent[2191]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:01:36.257875 amazon-ssm-agent[2191]: 2026/01/17 00:01:36 processing appconfig overrides Jan 17 00:01:36.261827 amazon-ssm-agent[2191]: 2026-01-17 00:01:36 INFO Proxy environment variables: Jan 17 00:01:36.267380 amazon-ssm-agent[2191]: 2026/01/17 00:01:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 17 00:01:36.270213 amazon-ssm-agent[2191]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Jan 17 00:01:36.270213 amazon-ssm-agent[2191]: 2026/01/17 00:01:36 processing appconfig overrides Jan 17 00:01:36.361827 amazon-ssm-agent[2191]: 2026-01-17 00:01:36 INFO https_proxy: Jan 17 00:01:36.461615 amazon-ssm-agent[2191]: 2026-01-17 00:01:36 INFO http_proxy: Jan 17 00:01:36.560828 amazon-ssm-agent[2191]: 2026-01-17 00:01:36 INFO no_proxy: Jan 17 00:01:36.660241 amazon-ssm-agent[2191]: 2026-01-17 00:01:36 INFO Checking if agent identity type OnPrem can be assumed Jan 17 00:01:36.710642 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 00:01:36.761753 amazon-ssm-agent[2191]: 2026-01-17 00:01:36 INFO Checking if agent identity type EC2 can be assumed Jan 17 00:01:36.763247 tar[2016]: linux-arm64/README.md Jan 17 00:01:36.800785 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 00:01:36.859740 amazon-ssm-agent[2191]: 2026-01-17 00:01:36 INFO Agent will take identity from EC2 Jan 17 00:01:36.934496 amazon-ssm-agent[2191]: 2026-01-17 00:01:36 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:01:36.934496 amazon-ssm-agent[2191]: 2026-01-17 00:01:36 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:01:36.934496 amazon-ssm-agent[2191]: 2026-01-17 00:01:36 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 17 00:01:36.934496 amazon-ssm-agent[2191]: 2026-01-17 00:01:36 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 17 00:01:36.934496 amazon-ssm-agent[2191]: 2026-01-17 00:01:36 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 17 00:01:36.934778 amazon-ssm-agent[2191]: 2026-01-17 00:01:36 INFO [amazon-ssm-agent] Starting Core Agent Jan 17 00:01:36.934778 amazon-ssm-agent[2191]: 2026-01-17 00:01:36 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 17 00:01:36.934778 amazon-ssm-agent[2191]: 2026-01-17 00:01:36 INFO [Registrar] Starting registrar module Jan 17 00:01:36.934778 amazon-ssm-agent[2191]: 2026-01-17 00:01:36 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 17 00:01:36.934778 amazon-ssm-agent[2191]: 2026-01-17 00:01:36 INFO [EC2Identity] EC2 registration was successful. Jan 17 00:01:36.934778 amazon-ssm-agent[2191]: 2026-01-17 00:01:36 INFO [CredentialRefresher] credentialRefresher has started Jan 17 00:01:36.934778 amazon-ssm-agent[2191]: 2026-01-17 00:01:36 INFO [CredentialRefresher] Starting credentials refresher loop Jan 17 00:01:36.934778 amazon-ssm-agent[2191]: 2026-01-17 00:01:36 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 17 00:01:36.958047 amazon-ssm-agent[2191]: 2026-01-17 00:01:36 INFO [CredentialRefresher] Next credential rotation will be in 32.416656791166666 minutes Jan 17 00:01:37.316581 sshd_keygen[2025]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 00:01:37.361415 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 00:01:37.372093 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 00:01:37.381234 systemd[1]: Started sshd@0-172.31.30.130:22-68.220.241.50:50662.service - OpenSSH per-connection server daemon (68.220.241.50:50662). Jan 17 00:01:37.393352 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 00:01:37.395847 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 00:01:37.407323 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 00:01:37.452897 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Jan 17 00:01:37.465842 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 00:01:37.472686 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 00:01:37.482276 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 00:01:37.924643 sshd[2225]: Accepted publickey for core from 68.220.241.50 port 50662 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:37.929651 sshd[2225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:37.934065 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:01:37.939350 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 00:01:37.942161 systemd[1]: Startup finished in 1.177s (kernel) + 8.447s (initrd) + 8.731s (userspace) = 18.356s. Jan 17 00:01:37.950625 (kubelet)[2239]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:01:37.974352 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 00:01:37.984190 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 00:01:37.995670 systemd-logind[2000]: New session 1 of user core. Jan 17 00:01:38.007126 amazon-ssm-agent[2191]: 2026-01-17 00:01:38 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 17 00:01:38.035358 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 00:01:38.050683 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 00:01:38.073046 (systemd)[2248]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 00:01:38.109894 amazon-ssm-agent[2191]: 2026-01-17 00:01:38 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2245) started Jan 17 00:01:38.210262 amazon-ssm-agent[2191]: 2026-01-17 00:01:38 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 17 00:01:38.360459 systemd[2248]: Queued start job for default target default.target. Jan 17 00:01:38.369438 systemd[2248]: Created slice app.slice - User Application Slice. Jan 17 00:01:38.369501 systemd[2248]: Reached target paths.target - Paths. Jan 17 00:01:38.369535 systemd[2248]: Reached target timers.target - Timers. Jan 17 00:01:38.372903 systemd[2248]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 00:01:38.408752 systemd[2248]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 00:01:38.408988 systemd[2248]: Reached target sockets.target - Sockets. Jan 17 00:01:38.409021 systemd[2248]: Reached target basic.target - Basic System. Jan 17 00:01:38.409102 systemd[2248]: Reached target default.target - Main User Target. Jan 17 00:01:38.409162 systemd[2248]: Startup finished in 323ms. Jan 17 00:01:38.409799 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 00:01:38.430204 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 00:01:38.733415 ntpd[1994]: Listen normally on 7 eth0 [fe80::452:a2ff:fe08:154b%2]:123 Jan 17 00:01:38.734923 ntpd[1994]: 17 Jan 00:01:38 ntpd[1994]: Listen normally on 7 eth0 [fe80::452:a2ff:fe08:154b%2]:123 Jan 17 00:01:38.810198 systemd[1]: Started sshd@1-172.31.30.130:22-68.220.241.50:50674.service - OpenSSH per-connection server daemon (68.220.241.50:50674). 
Jan 17 00:01:39.043244 kubelet[2239]: E0117 00:01:39.043100 2239 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:01:39.048294 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:01:39.048644 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:01:39.049164 systemd[1]: kubelet.service: Consumed 1.376s CPU time. Jan 17 00:01:39.321870 sshd[2272]: Accepted publickey for core from 68.220.241.50 port 50674 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:39.324295 sshd[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:39.333091 systemd-logind[2000]: New session 2 of user core. Jan 17 00:01:39.343012 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 00:01:39.682492 sshd[2272]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:39.689317 systemd[1]: sshd@1-172.31.30.130:22-68.220.241.50:50674.service: Deactivated successfully. Jan 17 00:01:39.693467 systemd[1]: session-2.scope: Deactivated successfully. Jan 17 00:01:39.694982 systemd-logind[2000]: Session 2 logged out. Waiting for processes to exit. Jan 17 00:01:39.697083 systemd-logind[2000]: Removed session 2. Jan 17 00:01:39.781191 systemd[1]: Started sshd@2-172.31.30.130:22-68.220.241.50:50690.service - OpenSSH per-connection server daemon (68.220.241.50:50690). Jan 17 00:01:40.270393 sshd[2280]: Accepted publickey for core from 68.220.241.50 port 50690 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:40.273058 sshd[2280]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:40.280643 systemd-logind[2000]: New session 3 of user core. Jan 17 00:01:40.290978 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 00:01:40.615973 sshd[2280]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:40.623093 systemd[1]: sshd@2-172.31.30.130:22-68.220.241.50:50690.service: Deactivated successfully. Jan 17 00:01:40.623157 systemd-logind[2000]: Session 3 logged out. Waiting for processes to exit. Jan 17 00:01:40.626645 systemd[1]: session-3.scope: Deactivated successfully. Jan 17 00:01:40.630015 systemd-logind[2000]: Removed session 3. Jan 17 00:01:40.727243 systemd[1]: Started sshd@3-172.31.30.130:22-68.220.241.50:50702.service - OpenSSH per-connection server daemon (68.220.241.50:50702). Jan 17 00:01:41.275890 sshd[2287]: Accepted publickey for core from 68.220.241.50 port 50702 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:41.278541 sshd[2287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:41.286101 systemd-logind[2000]: New session 4 of user core. Jan 17 00:01:41.296969 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 00:01:41.666028 sshd[2287]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:41.672609 systemd-logind[2000]: Session 4 logged out. Waiting for processes to exit. Jan 17 00:01:41.673578 systemd[1]: sshd@3-172.31.30.130:22-68.220.241.50:50702.service: Deactivated successfully. Jan 17 00:01:41.677484 systemd[1]: session-4.scope: Deactivated successfully. 
Jan 17 00:01:41.682707 systemd-logind[2000]: Removed session 4. Jan 17 00:01:42.167122 systemd-resolved[1942]: Clock change detected. Flushing caches. Jan 17 00:01:42.199727 systemd[1]: Started sshd@4-172.31.30.130:22-68.220.241.50:50718.service - OpenSSH per-connection server daemon (68.220.241.50:50718). Jan 17 00:01:42.729341 sshd[2294]: Accepted publickey for core from 68.220.241.50 port 50718 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:42.732084 sshd[2294]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:42.740486 systemd-logind[2000]: New session 5 of user core. Jan 17 00:01:42.751447 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 00:01:43.079914 sudo[2297]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 00:01:43.080627 sudo[2297]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:01:43.097507 sudo[2297]: pam_unix(sudo:session): session closed for user root Jan 17 00:01:43.181459 sshd[2294]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:43.188323 systemd[1]: sshd@4-172.31.30.130:22-68.220.241.50:50718.service: Deactivated successfully. Jan 17 00:01:43.191921 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 00:01:43.194659 systemd-logind[2000]: Session 5 logged out. Waiting for processes to exit. Jan 17 00:01:43.196463 systemd-logind[2000]: Removed session 5. Jan 17 00:01:43.276660 systemd[1]: Started sshd@5-172.31.30.130:22-68.220.241.50:50176.service - OpenSSH per-connection server daemon (68.220.241.50:50176). Jan 17 00:01:43.773946 sshd[2302]: Accepted publickey for core from 68.220.241.50 port 50176 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:43.776648 sshd[2302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:43.786288 systemd-logind[2000]: New session 6 of user core. Jan 17 00:01:43.794473 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 00:01:44.057194 sudo[2306]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 00:01:44.058975 sudo[2306]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:01:44.066971 sudo[2306]: pam_unix(sudo:session): session closed for user root Jan 17 00:01:44.078965 sudo[2305]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 00:01:44.079750 sudo[2305]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:01:44.103896 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 00:01:44.119466 auditctl[2309]: No rules Jan 17 00:01:44.120378 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 00:01:44.120788 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 00:01:44.130985 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 00:01:44.189962 augenrules[2327]: No rules Jan 17 00:01:44.193364 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 00:01:44.195937 sudo[2305]: pam_unix(sudo:session): session closed for user root Jan 17 00:01:44.275076 sshd[2302]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:44.282870 systemd[1]: sshd@5-172.31.30.130:22-68.220.241.50:50176.service: Deactivated successfully. 
Jan 17 00:01:44.283592 systemd-logind[2000]: Session 6 logged out. Waiting for processes to exit. Jan 17 00:01:44.287606 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 00:01:44.289360 systemd-logind[2000]: Removed session 6. Jan 17 00:01:44.373636 systemd[1]: Started sshd@6-172.31.30.130:22-68.220.241.50:50188.service - OpenSSH per-connection server daemon (68.220.241.50:50188). Jan 17 00:01:44.863062 sshd[2335]: Accepted publickey for core from 68.220.241.50 port 50188 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:44.865770 sshd[2335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:44.875399 systemd-logind[2000]: New session 7 of user core. Jan 17 00:01:44.881506 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 00:01:45.140117 sudo[2338]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 00:01:45.140787 sudo[2338]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 00:01:45.760671 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 17 00:01:45.761869 (dockerd)[2354]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 00:01:46.312087 dockerd[2354]: time="2026-01-17T00:01:46.311988308Z" level=info msg="Starting up" Jan 17 00:01:46.548394 systemd[1]: var-lib-docker-metacopy\x2dcheck2143621464-merged.mount: Deactivated successfully. Jan 17 00:01:46.563394 dockerd[2354]: time="2026-01-17T00:01:46.562998525Z" level=info msg="Loading containers: start." Jan 17 00:01:46.758273 kernel: Initializing XFRM netlink socket Jan 17 00:01:46.830374 (udev-worker)[2377]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:01:46.951341 systemd-networkd[1941]: docker0: Link UP Jan 17 00:01:46.974716 dockerd[2354]: time="2026-01-17T00:01:46.974636411Z" level=info msg="Loading containers: done." Jan 17 00:01:46.997422 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3372595448-merged.mount: Deactivated successfully. Jan 17 00:01:47.006773 dockerd[2354]: time="2026-01-17T00:01:47.006705164Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 00:01:47.007012 dockerd[2354]: time="2026-01-17T00:01:47.006873584Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 00:01:47.007162 dockerd[2354]: time="2026-01-17T00:01:47.007098740Z" level=info msg="Daemon has completed initialization" Jan 17 00:01:47.065265 dockerd[2354]: time="2026-01-17T00:01:47.064935080Z" level=info msg="API listen on /run/docker.sock" Jan 17 00:01:47.065525 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 00:01:48.252787 containerd[2019]: time="2026-01-17T00:01:48.252436090Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 17 00:01:48.973316 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2862648987.mount: Deactivated successfully. Jan 17 00:01:49.732416 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 00:01:49.743991 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 17 00:01:50.148594 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:01:50.161551 (kubelet)[2560]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:01:50.270669 kubelet[2560]: E0117 00:01:50.270609 2560 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:01:50.279891 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:01:50.280680 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 00:01:50.723249 containerd[2019]: time="2026-01-17T00:01:50.722045042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:50.724929 containerd[2019]: time="2026-01-17T00:01:50.724882634Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982" Jan 17 00:01:50.727038 containerd[2019]: time="2026-01-17T00:01:50.726995810Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:50.733379 containerd[2019]: time="2026-01-17T00:01:50.733301714Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:50.735898 containerd[2019]: time="2026-01-17T00:01:50.735848726Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.483350968s" Jan 17 00:01:50.736379 containerd[2019]: time="2026-01-17T00:01:50.736036538Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 17 00:01:50.737550 containerd[2019]: time="2026-01-17T00:01:50.737262350Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 17 00:01:52.453222 containerd[2019]: time="2026-01-17T00:01:52.453141027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:52.463299 containerd[2019]: time="2026-01-17T00:01:52.462387111Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086" Jan 17 00:01:52.463299 containerd[2019]: time="2026-01-17T00:01:52.462431031Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:52.469503 containerd[2019]: time="2026-01-17T00:01:52.468618411Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:01:52.471098 containerd[2019]: time="2026-01-17T00:01:52.471029595Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.733708505s" Jan 17 00:01:52.471098 containerd[2019]: time="2026-01-17T00:01:52.471095463Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 17 00:01:52.471794 containerd[2019]: time="2026-01-17T00:01:52.471740895Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 17 00:01:53.971125 containerd[2019]: time="2026-01-17T00:01:53.971059962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:53.973178 containerd[2019]: time="2026-01-17T00:01:53.973122978Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747" Jan 17 00:01:53.975002 containerd[2019]: time="2026-01-17T00:01:53.974050614Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:53.979957 containerd[2019]: time="2026-01-17T00:01:53.979895058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:53.982561 containerd[2019]: time="2026-01-17T00:01:53.982512330Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.510712947s" Jan 17 00:01:53.982747 containerd[2019]: time="2026-01-17T00:01:53.982715790Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 17 00:01:53.984547 containerd[2019]: time="2026-01-17T00:01:53.984498462Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 17 00:01:55.235458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3049972332.mount: Deactivated successfully.
Jan 17 00:01:55.831687 containerd[2019]: time="2026-01-17T00:01:55.831621175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:55.833388 containerd[2019]: time="2026-01-17T00:01:55.833288887Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724" Jan 17 00:01:55.834678 containerd[2019]: time="2026-01-17T00:01:55.834468655Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:55.840090 containerd[2019]: time="2026-01-17T00:01:55.840005755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:55.841654 containerd[2019]: time="2026-01-17T00:01:55.841410751Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.856664189s" Jan 17 00:01:55.841654 containerd[2019]: time="2026-01-17T00:01:55.841477195Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 17 00:01:55.842732 containerd[2019]: time="2026-01-17T00:01:55.842344267Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 17 00:01:56.400772 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2948342526.mount: Deactivated successfully. 
Jan 17 00:01:57.734547 containerd[2019]: time="2026-01-17T00:01:57.734464605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:57.737896 containerd[2019]: time="2026-01-17T00:01:57.737830317Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 17 00:01:57.740920 containerd[2019]: time="2026-01-17T00:01:57.740854005Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:57.747315 containerd[2019]: time="2026-01-17T00:01:57.746355129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:57.748925 containerd[2019]: time="2026-01-17T00:01:57.748862577Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.906463326s" Jan 17 00:01:57.749066 containerd[2019]: time="2026-01-17T00:01:57.748923909Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 17 00:01:57.750221 containerd[2019]: time="2026-01-17T00:01:57.750160689Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 17 00:01:58.237498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2347981641.mount: Deactivated successfully. 
Jan 17 00:01:58.252910 containerd[2019]: time="2026-01-17T00:01:58.252832171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:58.254864 containerd[2019]: time="2026-01-17T00:01:58.254801983Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 17 00:01:58.257346 containerd[2019]: time="2026-01-17T00:01:58.257276851Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:58.263254 containerd[2019]: time="2026-01-17T00:01:58.262490047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:58.265164 containerd[2019]: time="2026-01-17T00:01:58.264041143Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 513.666554ms" Jan 17 00:01:58.265164 containerd[2019]: time="2026-01-17T00:01:58.264100327Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 17 00:01:58.265486 containerd[2019]: time="2026-01-17T00:01:58.265435387Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 17 00:01:58.881838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2305833552.mount: Deactivated successfully. Jan 17 00:02:00.313424 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 00:02:00.324465 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:02:00.701690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:02:00.714513 (kubelet)[2700]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 00:02:00.808231 kubelet[2700]: E0117 00:02:00.807129 2700 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 00:02:00.812089 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 00:02:00.812487 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 00:02:01.791627 containerd[2019]: time="2026-01-17T00:02:01.791546461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:01.794560 containerd[2019]: time="2026-01-17T00:02:01.794499337Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Jan 17 00:02:01.795660 containerd[2019]: time="2026-01-17T00:02:01.795578221Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:01.803351 containerd[2019]: time="2026-01-17T00:02:01.803300125Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:01.806314 containerd[2019]: time="2026-01-17T00:02:01.805846993Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.540351558s" Jan 17 00:02:01.806314 containerd[2019]: time="2026-01-17T00:02:01.805910905Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 17 00:02:06.112050 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 17 00:02:10.455142 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:02:10.472628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:02:10.519074 systemd[1]: Reloading requested from client PID 2742 ('systemctl') (unit session-7.scope)... Jan 17 00:02:10.519106 systemd[1]: Reloading... Jan 17 00:02:10.758305 zram_generator::config[2788]: No configuration found. Jan 17 00:02:11.005375 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:02:11.178222 systemd[1]: Reloading finished in 658 ms. Jan 17 00:02:11.278572 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:02:11.286130 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:02:11.286598 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:02:11.292822 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:02:11.604836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:02:11.630790 (kubelet)[2847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:02:11.704495 kubelet[2847]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:02:11.704495 kubelet[2847]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 17 00:02:11.704495 kubelet[2847]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:02:11.705051 kubelet[2847]: I0117 00:02:11.704602 2847 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:02:12.966727 kubelet[2847]: I0117 00:02:12.966654 2847 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:02:12.966727 kubelet[2847]: I0117 00:02:12.966709 2847 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:02:12.967531 kubelet[2847]: I0117 00:02:12.967195 2847 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:02:13.013910 kubelet[2847]: E0117 00:02:13.013842 2847 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.30.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.30.130:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:02:13.016150 kubelet[2847]: I0117 00:02:13.015604 2847 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:02:13.026145 kubelet[2847]: E0117 00:02:13.026096 2847 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:02:13.026145 kubelet[2847]: I0117 00:02:13.026144 2847 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:02:13.036251 kubelet[2847]: I0117 00:02:13.035506 2847 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 17 00:02:13.036251 kubelet[2847]: I0117 00:02:13.035938 2847 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:02:13.036470 kubelet[2847]: I0117 00:02:13.035980 2847 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-130","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:02:13.036612 kubelet[2847]: I0117 00:02:13.036527 2847 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:02:13.036612 kubelet[2847]: I0117 00:02:13.036550 2847 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:02:13.036926 kubelet[2847]: I0117 00:02:13.036897 2847 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:02:13.043746 kubelet[2847]: I0117 00:02:13.043550 2847 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:02:13.043746 kubelet[2847]: I0117 00:02:13.043618 2847 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:02:13.043746 kubelet[2847]: I0117 00:02:13.043655 2847 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:02:13.043746 kubelet[2847]: I0117 00:02:13.043675 2847 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:02:13.049632 kubelet[2847]: W0117 00:02:13.048750 2847 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-130&limit=500&resourceVersion=0": dial tcp 172.31.30.130:6443: connect: connection refused Jan 17 00:02:13.049632 kubelet[2847]: E0117 00:02:13.048839 2847 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.30.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-130&limit=500&resourceVersion=0\": dial tcp 172.31.30.130:6443: connect: connection refused" logger="UnhandledError"
Jan 17 00:02:13.050361 kubelet[2847]: W0117 00:02:13.050298 2847 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.130:6443: connect: connection refused Jan 17 00:02:13.050539 kubelet[2847]: E0117 00:02:13.050510 2847 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.30.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.130:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:02:13.050779 kubelet[2847]: I0117 00:02:13.050755 2847 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:02:13.054241 kubelet[2847]: I0117 00:02:13.052919 2847 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:02:13.054241 kubelet[2847]: W0117 00:02:13.053164 2847 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 00:02:13.055887 kubelet[2847]: I0117 00:02:13.055853 2847 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:02:13.056105 kubelet[2847]: I0117 00:02:13.056087 2847 server.go:1287] "Started kubelet" Jan 17 00:02:13.063432 kubelet[2847]: E0117 00:02:13.062928 2847 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.130:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.130:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-130.188b5bb9fdac3dd5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-130,UID:ip-172-31-30-130,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-130,},FirstTimestamp:2026-01-17 00:02:13.056052693 +0000 UTC m=+1.419009704,LastTimestamp:2026-01-17 00:02:13.056052693 +0000 UTC m=+1.419009704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-130,}" Jan 17 00:02:13.063643 kubelet[2847]: I0117 00:02:13.063456 2847 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:02:13.065015 kubelet[2847]: I0117 00:02:13.064956 2847 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:02:13.065352 kubelet[2847]: I0117 00:02:13.065267 2847 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:02:13.065995 kubelet[2847]: I0117 00:02:13.065965 2847 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:02:13.069250 kubelet[2847]: I0117 00:02:13.069172 2847 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:02:13.071363 kubelet[2847]: I0117 00:02:13.070745 2847 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:02:13.075061 kubelet[2847]: E0117 00:02:13.074981 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-130\" not found" Jan 17 00:02:13.075258 kubelet[2847]: I0117
00:02:13.075102 2847 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:02:13.075649 kubelet[2847]: I0117 00:02:13.075601 2847 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:02:13.076295 kubelet[2847]: I0117 00:02:13.075813 2847 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:02:13.081881 kubelet[2847]: W0117 00:02:13.079446 2847 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.130:6443: connect: connection refused Jan 17 00:02:13.081881 kubelet[2847]: E0117 00:02:13.079563 2847 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.30.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.130:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:02:13.081881 kubelet[2847]: I0117 00:02:13.079840 2847 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:02:13.081881 kubelet[2847]: I0117 00:02:13.080062 2847 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:02:13.082237 kubelet[2847]: E0117 00:02:13.082161 2847 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:02:13.083092 kubelet[2847]: E0117 00:02:13.083023 2847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-130?timeout=10s\": dial tcp 172.31.30.130:6443: connect: connection refused" interval="200ms" Jan 17 00:02:13.083334 kubelet[2847]: I0117 00:02:13.083296 2847 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:02:13.104538 kubelet[2847]: I0117 00:02:13.104477 2847 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:02:13.107089 kubelet[2847]: I0117 00:02:13.107049 2847 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:02:13.107314 kubelet[2847]: I0117 00:02:13.107295 2847 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:02:13.107454 kubelet[2847]: I0117 00:02:13.107433 2847 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
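Note the lease controller failure above against 172.31.30.130:6443, which schedules a retry at interval="200ms"; later entries in this log show the interval doubling to 400ms, 800ms, and 1.6s while the API server stays unreachable. A minimal sketch of that doubling policy; the 7 s cap is an assumption, not something this log reaches:

```python
# Doubling retry interval as seen in the repeated "Failed to ensure
# lease exists, will retry" entries: 200ms -> 400ms -> 800ms -> 1.6s.
# The cap value is assumed for illustration only.
def backoff_intervals(base=0.2, cap=7.0, attempts=6):
    interval = base
    while attempts:
        yield interval
        interval = min(interval * 2, cap)
        attempts -= 1

print(["%.1fs" % i for i in backoff_intervals()])
# ['0.2s', '0.4s', '0.8s', '1.6s', '3.2s', '6.4s']
```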
Jan 17 00:02:13.107553 kubelet[2847]: I0117 00:02:13.107535 2847 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:02:13.107733 kubelet[2847]: E0117 00:02:13.107694 2847 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:02:13.115676 kubelet[2847]: W0117 00:02:13.115610 2847 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.130:6443: connect: connection refused Jan 17 00:02:13.117984 kubelet[2847]: E0117 00:02:13.117937 2847 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.30.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.130:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:02:13.132760 kubelet[2847]: I0117 00:02:13.132715 2847 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:02:13.132970 kubelet[2847]: I0117 00:02:13.132946 2847 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:02:13.133486 kubelet[2847]: I0117 00:02:13.133061 2847 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:02:13.136258 kubelet[2847]: I0117 00:02:13.136109 2847 policy_none.go:49] "None policy: Start" Jan 17 00:02:13.136258 kubelet[2847]: I0117 00:02:13.136159 2847 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:02:13.136258 kubelet[2847]: I0117 00:02:13.136233 2847 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:02:13.146634 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 17 00:02:13.164333 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 17 00:02:13.175429 kubelet[2847]: E0117 00:02:13.175376 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-130\" not found" Jan 17 00:02:13.179960 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 17 00:02:13.183307 kubelet[2847]: I0117 00:02:13.183274 2847 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:02:13.184519 kubelet[2847]: I0117 00:02:13.183752 2847 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:02:13.184519 kubelet[2847]: I0117 00:02:13.183784 2847 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:02:13.184519 kubelet[2847]: I0117 00:02:13.184345 2847 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:02:13.186502 kubelet[2847]: E0117 00:02:13.186377 2847 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 17 00:02:13.187066 kubelet[2847]: E0117 00:02:13.187024 2847 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-130\" not found" Jan 17 00:02:13.227645 systemd[1]: Created slice kubepods-burstable-pod2d04c4b8bc452373d2a69a7446ae71f0.slice - libcontainer container kubepods-burstable-pod2d04c4b8bc452373d2a69a7446ae71f0.slice. 
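The nodeConfig dumped earlier carries the kubelet's hard eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A sketch of how a signal is compared against such a threshold (quantity thresholds compare absolute bytes, percentage thresholds compare against capacity); the sample stats below are hypothetical:

```python
# Hard eviction thresholds as listed in the nodeConfig above.
THRESHOLDS = {
    "memory.available": ("quantity", 100 * 1024 * 1024),  # 100Mi in bytes
    "nodefs.available": ("percentage", 0.10),
    "nodefs.inodesFree": ("percentage", 0.05),
    "imagefs.available": ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

def breached(signal, available, capacity):
    kind, value = THRESHOLDS[signal]
    limit = value if kind == "quantity" else value * capacity
    return available < limit

# Hypothetical stats, for illustration only.
print(breached("memory.available", 80 * 1024**2, 3900 * 1024**2))  # True: 80Mi < 100Mi
print(breached("nodefs.available", 6 * 1024**3, 40 * 1024**3))     # False: 15% free > 10%
```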
Jan 17 00:02:13.245837 kubelet[2847]: E0117 00:02:13.244714 2847 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-130\" not found" node="ip-172-31-30-130" Jan 17 00:02:13.251934 systemd[1]: Created slice kubepods-burstable-pod1d76bbf94bdd785a2a89536caaadfcc6.slice - libcontainer container kubepods-burstable-pod1d76bbf94bdd785a2a89536caaadfcc6.slice. Jan 17 00:02:13.262441 kubelet[2847]: E0117 00:02:13.262373 2847 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-130\" not found" node="ip-172-31-30-130" Jan 17 00:02:13.270869 systemd[1]: Created slice kubepods-burstable-podd5e9f6ece992bff35995563a3d8ec943.slice - libcontainer container kubepods-burstable-podd5e9f6ece992bff35995563a3d8ec943.slice. Jan 17 00:02:13.274793 kubelet[2847]: E0117 00:02:13.274735 2847 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-130\" not found" node="ip-172-31-30-130" Jan 17 00:02:13.276775 kubelet[2847]: I0117 00:02:13.276703 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2d04c4b8bc452373d2a69a7446ae71f0-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-130\" (UID: \"2d04c4b8bc452373d2a69a7446ae71f0\") " pod="kube-system/kube-apiserver-ip-172-31-30-130" Jan 17 00:02:13.277030 kubelet[2847]: I0117 00:02:13.276923 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1d76bbf94bdd785a2a89536caaadfcc6-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-130\" (UID: \"1d76bbf94bdd785a2a89536caaadfcc6\") " pod="kube-system/kube-controller-manager-ip-172-31-30-130" Jan 17 00:02:13.277193 kubelet[2847]: I0117 00:02:13.277090 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d76bbf94bdd785a2a89536caaadfcc6-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-130\" (UID: \"1d76bbf94bdd785a2a89536caaadfcc6\") " pod="kube-system/kube-controller-manager-ip-172-31-30-130" Jan 17 00:02:13.277503 kubelet[2847]: I0117 00:02:13.277368 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2d04c4b8bc452373d2a69a7446ae71f0-ca-certs\") pod \"kube-apiserver-ip-172-31-30-130\" (UID: \"2d04c4b8bc452373d2a69a7446ae71f0\") " pod="kube-system/kube-apiserver-ip-172-31-30-130" Jan 17 00:02:13.277673 kubelet[2847]: I0117 00:02:13.277478 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2d04c4b8bc452373d2a69a7446ae71f0-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-130\" (UID: \"2d04c4b8bc452373d2a69a7446ae71f0\") " pod="kube-system/kube-apiserver-ip-172-31-30-130" Jan 17 00:02:13.277852 kubelet[2847]: I0117 00:02:13.277643 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d76bbf94bdd785a2a89536caaadfcc6-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-130\" (UID: \"1d76bbf94bdd785a2a89536caaadfcc6\") " pod="kube-system/kube-controller-manager-ip-172-31-30-130" Jan 17 
00:02:13.277852 kubelet[2847]: I0117 00:02:13.277818 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d76bbf94bdd785a2a89536caaadfcc6-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-130\" (UID: \"1d76bbf94bdd785a2a89536caaadfcc6\") " pod="kube-system/kube-controller-manager-ip-172-31-30-130" Jan 17 00:02:13.278143 kubelet[2847]: I0117 00:02:13.278015 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d76bbf94bdd785a2a89536caaadfcc6-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-130\" (UID: \"1d76bbf94bdd785a2a89536caaadfcc6\") " pod="kube-system/kube-controller-manager-ip-172-31-30-130" Jan 17 00:02:13.278143 kubelet[2847]: I0117 00:02:13.278103 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5e9f6ece992bff35995563a3d8ec943-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-130\" (UID: \"d5e9f6ece992bff35995563a3d8ec943\") " pod="kube-system/kube-scheduler-ip-172-31-30-130" Jan 17 00:02:13.284284 kubelet[2847]: E0117 00:02:13.284181 2847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-130?timeout=10s\": dial tcp 172.31.30.130:6443: connect: connection refused" interval="400ms" Jan 17 00:02:13.286989 kubelet[2847]: I0117 00:02:13.286425 2847 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-130" Jan 17 00:02:13.287393 kubelet[2847]: E0117 00:02:13.287352 2847 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.130:6443/api/v1/nodes\": dial tcp 172.31.30.130:6443: connect: connection refused" node="ip-172-31-30-130" Jan 17 00:02:13.490273 kubelet[2847]: I0117 00:02:13.489581 2847 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-130" Jan 17 00:02:13.490273 kubelet[2847]: E0117 00:02:13.490022 2847 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.130:6443/api/v1/nodes\": dial tcp 172.31.30.130:6443: connect: connection refused" node="ip-172-31-30-130" Jan 17 00:02:13.546875 containerd[2019]: time="2026-01-17T00:02:13.546815267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-130,Uid:2d04c4b8bc452373d2a69a7446ae71f0,Namespace:kube-system,Attempt:0,}" Jan 17 00:02:13.564119 containerd[2019]: time="2026-01-17T00:02:13.564016547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-130,Uid:1d76bbf94bdd785a2a89536caaadfcc6,Namespace:kube-system,Attempt:0,}" Jan 17 00:02:13.579237 containerd[2019]: time="2026-01-17T00:02:13.578918592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-130,Uid:d5e9f6ece992bff35995563a3d8ec943,Namespace:kube-system,Attempt:0,}" Jan 17 00:02:13.685541 kubelet[2847]: E0117 00:02:13.685473 2847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-130?timeout=10s\": dial tcp 172.31.30.130:6443: connect: connection refused" interval="800ms" Jan 17 00:02:13.893254 kubelet[2847]: I0117 00:02:13.892711 2847 
kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-130" Jan 17 00:02:13.893254 kubelet[2847]: E0117 00:02:13.893184 2847 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.130:6443/api/v1/nodes\": dial tcp 172.31.30.130:6443: connect: connection refused" node="ip-172-31-30-130" Jan 17 00:02:13.900194 kubelet[2847]: W0117 00:02:13.900041 2847 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.30.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-130&limit=500&resourceVersion=0": dial tcp 172.31.30.130:6443: connect: connection refused Jan 17 00:02:13.900194 kubelet[2847]: E0117 00:02:13.900132 2847 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.30.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-130&limit=500&resourceVersion=0\": dial tcp 172.31.30.130:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:02:13.994799 kubelet[2847]: W0117 00:02:13.994709 2847 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.30.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.130:6443: connect: connection refused Jan 17 00:02:13.995364 kubelet[2847]: E0117 00:02:13.994805 2847 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.30.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.30.130:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:02:14.035812 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount461069897.mount: Deactivated successfully. 
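All of the reflector warnings in this stretch have one shape: a list call for some resource type against https://172.31.30.130:6443, refused at TCP connect because the API server is not up yet. A small sketch that tallies such warnings by resource type, matched against the klog format shown in this journal:

```python
# Tally reflector list failures by resource type, matching the
# "failed to list *v1.<Kind>" pattern visible in the entries above.
import re
from collections import Counter

PATTERN = re.compile(r'failed to list \*v1\.(\w+):')

def tally(log_lines):
    counts = Counter()
    for line in log_lines:
        m = PATTERN.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts

sample = [  # abbreviated copies of entries from this log
    'reflector.go:569] ... failed to list *v1.Node: Get "https://172.31.30.130:6443/api/v1/nodes?...": connect: connection refused',
    'reflector.go:569] ... failed to list *v1.CSIDriver: Get "https://172.31.30.130:6443/apis/storage.k8s.io/v1/csidrivers?...": connect: connection refused',
]
print(tally(sample))  # Counter({'Node': 1, 'CSIDriver': 1})
```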
Jan 17 00:02:14.045521 containerd[2019]: time="2026-01-17T00:02:14.045443926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:02:14.051234 containerd[2019]: time="2026-01-17T00:02:14.051162670Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 17 00:02:14.052392 containerd[2019]: time="2026-01-17T00:02:14.052332058Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:02:14.055219 containerd[2019]: time="2026-01-17T00:02:14.055141378Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:02:14.057629 containerd[2019]: time="2026-01-17T00:02:14.057560530Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:02:14.060273 containerd[2019]: time="2026-01-17T00:02:14.059459386Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:02:14.060273 containerd[2019]: time="2026-01-17T00:02:14.059855482Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 00:02:14.062241 containerd[2019]: time="2026-01-17T00:02:14.062155918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 00:02:14.068846 containerd[2019]: time="2026-01-17T00:02:14.068134846Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 520.557579ms" Jan 17 00:02:14.072804 containerd[2019]: time="2026-01-17T00:02:14.072007078Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 507.807471ms" Jan 17 00:02:14.090307 containerd[2019]: time="2026-01-17T00:02:14.090195574Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 511.166318ms" Jan 17 00:02:14.099635 kubelet[2847]: W0117 00:02:14.099561 2847 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.30.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.130:6443: connect: connection refused Jan 17 00:02:14.101389 
kubelet[2847]: E0117 00:02:14.101332 2847 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.30.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.30.130:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:02:14.249076 kubelet[2847]: W0117 00:02:14.248739 2847 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.30.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.30.130:6443: connect: connection refused Jan 17 00:02:14.249076 kubelet[2847]: E0117 00:02:14.248844 2847 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.30.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.30.130:6443: connect: connection refused" logger="UnhandledError" Jan 17 00:02:14.267383 containerd[2019]: time="2026-01-17T00:02:14.267024923Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:14.267617 containerd[2019]: time="2026-01-17T00:02:14.267312671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:14.267617 containerd[2019]: time="2026-01-17T00:02:14.267547511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:14.269163 containerd[2019]: time="2026-01-17T00:02:14.269039159Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:14.288431 containerd[2019]: time="2026-01-17T00:02:14.287026043Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:14.288431 containerd[2019]: time="2026-01-17T00:02:14.287126291Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:14.288431 containerd[2019]: time="2026-01-17T00:02:14.287163467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:14.288431 containerd[2019]: time="2026-01-17T00:02:14.287338559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:14.293622 containerd[2019]: time="2026-01-17T00:02:14.293289215Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:14.293622 containerd[2019]: time="2026-01-17T00:02:14.293421815Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:14.293622 containerd[2019]: time="2026-01-17T00:02:14.293460107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:14.294438 containerd[2019]: time="2026-01-17T00:02:14.294054311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:14.331539 systemd[1]: Started cri-containerd-73027666a7608066b1fd3f65192120d91442a0707315155392c45c414c017755.scope - libcontainer container 73027666a7608066b1fd3f65192120d91442a0707315155392c45c414c017755. Jan 17 00:02:14.349585 systemd[1]: Started cri-containerd-e19ca760b57fedd37780e30cf72e2cd8739a664d7b007bc8d18a4153f435019f.scope - libcontainer container e19ca760b57fedd37780e30cf72e2cd8739a664d7b007bc8d18a4153f435019f. Jan 17 00:02:14.365564 systemd[1]: Started cri-containerd-441a065c2c83888717da28320e9174e62ee6e10cd42fefdb4d7daef643169262.scope - libcontainer container 441a065c2c83888717da28320e9174e62ee6e10cd42fefdb4d7daef643169262. Jan 17 00:02:14.464114 containerd[2019]: time="2026-01-17T00:02:14.462653352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-130,Uid:1d76bbf94bdd785a2a89536caaadfcc6,Namespace:kube-system,Attempt:0,} returns sandbox id \"73027666a7608066b1fd3f65192120d91442a0707315155392c45c414c017755\"" Jan 17 00:02:14.470670 containerd[2019]: time="2026-01-17T00:02:14.470598204Z" level=info msg="CreateContainer within sandbox \"73027666a7608066b1fd3f65192120d91442a0707315155392c45c414c017755\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 00:02:14.482478 containerd[2019]: time="2026-01-17T00:02:14.481878876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-130,Uid:d5e9f6ece992bff35995563a3d8ec943,Namespace:kube-system,Attempt:0,} returns sandbox id \"e19ca760b57fedd37780e30cf72e2cd8739a664d7b007bc8d18a4153f435019f\"" Jan 17 00:02:14.487295 kubelet[2847]: E0117 00:02:14.487066 2847 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-130?timeout=10s\": dial tcp 172.31.30.130:6443: connect: connection refused" interval="1.6s" Jan 17 00:02:14.492157 containerd[2019]: time="2026-01-17T00:02:14.491878152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-130,Uid:2d04c4b8bc452373d2a69a7446ae71f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"441a065c2c83888717da28320e9174e62ee6e10cd42fefdb4d7daef643169262\"" Jan 17 00:02:14.496903 containerd[2019]: time="2026-01-17T00:02:14.496432752Z" level=info msg="CreateContainer within sandbox \"e19ca760b57fedd37780e30cf72e2cd8739a664d7b007bc8d18a4153f435019f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 00:02:14.500960 containerd[2019]: time="2026-01-17T00:02:14.500726208Z" level=info msg="CreateContainer within sandbox \"441a065c2c83888717da28320e9174e62ee6e10cd42fefdb4d7daef643169262\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 00:02:14.516624 containerd[2019]: time="2026-01-17T00:02:14.516399840Z" level=info msg="CreateContainer within sandbox \"73027666a7608066b1fd3f65192120d91442a0707315155392c45c414c017755\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9ae0d9d8486a3c7fcb609f48471cc70af8b99ac0ce11f02bad56fcbe8d7930f6\"" Jan 17 00:02:14.517573 containerd[2019]: time="2026-01-17T00:02:14.517507224Z" level=info msg="StartContainer for \"9ae0d9d8486a3c7fcb609f48471cc70af8b99ac0ce11f02bad56fcbe8d7930f6\"" Jan 17 00:02:14.538890 containerd[2019]: time="2026-01-17T00:02:14.538729308Z" level=info msg="CreateContainer within sandbox 
\"441a065c2c83888717da28320e9174e62ee6e10cd42fefdb4d7daef643169262\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ca0eeac1524918ba43577ebc44eb63effef4d0761c4d9682882287b5dc45b00f\"" Jan 17 00:02:14.541869 containerd[2019]: time="2026-01-17T00:02:14.541491972Z" level=info msg="StartContainer for \"ca0eeac1524918ba43577ebc44eb63effef4d0761c4d9682882287b5dc45b00f\"" Jan 17 00:02:14.543354 containerd[2019]: time="2026-01-17T00:02:14.543285108Z" level=info msg="CreateContainer within sandbox \"e19ca760b57fedd37780e30cf72e2cd8739a664d7b007bc8d18a4153f435019f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"009d4d6691ad50efd778831154708a2f74ee3782e7de2025dec46066a3f3e6f3\"" Jan 17 00:02:14.544043 containerd[2019]: time="2026-01-17T00:02:14.544007472Z" level=info msg="StartContainer for \"009d4d6691ad50efd778831154708a2f74ee3782e7de2025dec46066a3f3e6f3\"" Jan 17 00:02:14.591548 systemd[1]: Started cri-containerd-9ae0d9d8486a3c7fcb609f48471cc70af8b99ac0ce11f02bad56fcbe8d7930f6.scope - libcontainer container 9ae0d9d8486a3c7fcb609f48471cc70af8b99ac0ce11f02bad56fcbe8d7930f6. Jan 17 00:02:14.633623 systemd[1]: Started cri-containerd-009d4d6691ad50efd778831154708a2f74ee3782e7de2025dec46066a3f3e6f3.scope - libcontainer container 009d4d6691ad50efd778831154708a2f74ee3782e7de2025dec46066a3f3e6f3. Jan 17 00:02:14.648317 systemd[1]: Started cri-containerd-ca0eeac1524918ba43577ebc44eb63effef4d0761c4d9682882287b5dc45b00f.scope - libcontainer container ca0eeac1524918ba43577ebc44eb63effef4d0761c4d9682882287b5dc45b00f. Jan 17 00:02:14.697548 kubelet[2847]: I0117 00:02:14.697492 2847 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-130" Jan 17 00:02:14.698027 kubelet[2847]: E0117 00:02:14.697962 2847 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.30.130:6443/api/v1/nodes\": dial tcp 172.31.30.130:6443: connect: connection refused" node="ip-172-31-30-130" Jan 17 00:02:14.763834 containerd[2019]: time="2026-01-17T00:02:14.762910165Z" level=info msg="StartContainer for \"9ae0d9d8486a3c7fcb609f48471cc70af8b99ac0ce11f02bad56fcbe8d7930f6\" returns successfully" Jan 17 00:02:14.778958 containerd[2019]: time="2026-01-17T00:02:14.778753909Z" level=info msg="StartContainer for \"ca0eeac1524918ba43577ebc44eb63effef4d0761c4d9682882287b5dc45b00f\" returns successfully" Jan 17 00:02:14.793215 containerd[2019]: time="2026-01-17T00:02:14.792750782Z" level=info msg="StartContainer for \"009d4d6691ad50efd778831154708a2f74ee3782e7de2025dec46066a3f3e6f3\" returns successfully" Jan 17 00:02:15.142859 kubelet[2847]: E0117 00:02:15.142799 2847 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-130\" not found" node="ip-172-31-30-130" Jan 17 00:02:15.157923 kubelet[2847]: E0117 00:02:15.157860 2847 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-130\" not found" node="ip-172-31-30-130" Jan 17 00:02:15.165859 kubelet[2847]: E0117 00:02:15.165810 2847 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-130\" not found" node="ip-172-31-30-130" Jan 17 00:02:16.166570 kubelet[2847]: E0117 00:02:16.166512 2847 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-130\" not found" node="ip-172-31-30-130" Jan 17 
00:02:16.167110 kubelet[2847]: E0117 00:02:16.167041 2847 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-130\" not found" node="ip-172-31-30-130" Jan 17 00:02:16.168457 kubelet[2847]: E0117 00:02:16.168412 2847 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-130\" not found" node="ip-172-31-30-130" Jan 17 00:02:16.301402 kubelet[2847]: I0117 00:02:16.300847 2847 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-130" Jan 17 00:02:17.181964 kubelet[2847]: E0117 00:02:17.181912 2847 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-30-130\" not found" node="ip-172-31-30-130" Jan 17 00:02:19.649691 kubelet[2847]: E0117 00:02:19.649629 2847 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-130\" not found" node="ip-172-31-30-130" Jan 17 00:02:19.692925 kubelet[2847]: I0117 00:02:19.691810 2847 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-130" Jan 17 00:02:19.782871 kubelet[2847]: I0117 00:02:19.782811 2847 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-130" Jan 17 00:02:19.805909 kubelet[2847]: E0117 00:02:19.805531 2847 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-30-130\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-30-130" Jan 17 00:02:19.805909 kubelet[2847]: I0117 00:02:19.805590 2847 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-130" Jan 17 00:02:19.813891 kubelet[2847]: E0117 00:02:19.813525 2847 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-130\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-30-130" Jan 17 00:02:19.813891 kubelet[2847]: I0117 00:02:19.813583 2847 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-130" Jan 17 00:02:19.822715 kubelet[2847]: E0117 00:02:19.822656 2847 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-30-130\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-30-130" Jan 17 00:02:20.053693 kubelet[2847]: I0117 00:02:20.053220 2847 apiserver.go:52] "Watching apiserver" Jan 17 00:02:20.076250 kubelet[2847]: I0117 00:02:20.076180 2847 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:02:21.020554 update_engine[2001]: I20260117 00:02:21.020460 2001 update_attempter.cc:509] Updating boot flags... Jan 17 00:02:21.161367 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3137) Jan 17 00:02:21.519313 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3136) Jan 17 00:02:22.115262 systemd[1]: Reloading requested from client PID 3306 ('systemctl') (unit session-7.scope)... Jan 17 00:02:22.115288 systemd[1]: Reloading... Jan 17 00:02:22.267307 zram_generator::config[3346]: No configuration found. 
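The "no PriorityClass with name system-node-critical was found" failures above are a bootstrap race: the API server creates its built-in PriorityClasses shortly after it starts serving, after which mirror-pod creation goes through (the later "already exists" entry at 00:02:24 shows the kube-apiserver mirror pod did get created). A sketch that checks for the class, assuming the `kubernetes` Python client is installed and a kubeconfig is reachable:

```python
# Check whether the built-in PriorityClass exists yet; during the
# bootstrap window seen above, this would return 404.
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
api = client.SchedulingV1Api()
try:
    pc = api.read_priority_class("system-node-critical")
    print(f"{pc.metadata.name}: value={pc.value}")  # built-in value is 2000001000
except ApiException as e:
    if e.status == 404:
        print("not created yet; the apiserver bootstraps it shortly after startup")
    else:
        raise
```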
Jan 17 00:02:22.599441 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:02:22.808998 systemd[1]: Reloading finished in 693 ms. Jan 17 00:02:22.885533 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:02:22.901987 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:02:22.902418 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:02:22.902502 systemd[1]: kubelet.service: Consumed 2.200s CPU time, 132.5M memory peak, 0B memory swap peak. Jan 17 00:02:22.910831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:02:23.249978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:02:23.266889 (kubelet)[3407]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:02:23.393351 kubelet[3407]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:02:23.394534 kubelet[3407]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:02:23.394534 kubelet[3407]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:02:23.394534 kubelet[3407]: I0117 00:02:23.393986 3407 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:02:23.408172 kubelet[3407]: I0117 00:02:23.408105 3407 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:02:23.408172 kubelet[3407]: I0117 00:02:23.408153 3407 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:02:23.411350 kubelet[3407]: I0117 00:02:23.409466 3407 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:02:23.411953 kubelet[3407]: I0117 00:02:23.411922 3407 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 00:02:23.416148 kubelet[3407]: I0117 00:02:23.416091 3407 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:02:23.425918 kubelet[3407]: E0117 00:02:23.425858 3407 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:02:23.426124 kubelet[3407]: I0117 00:02:23.426101 3407 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 17 00:02:23.434282 kubelet[3407]: I0117 00:02:23.434192 3407 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 00:02:23.434911 kubelet[3407]: I0117 00:02:23.434853 3407 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:02:23.435669 kubelet[3407]: I0117 00:02:23.435022 3407 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-30-130","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 17 00:02:23.436026 kubelet[3407]: I0117 00:02:23.435999 3407 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:02:23.436152 kubelet[3407]: I0117 00:02:23.436132 3407 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:02:23.437193 kubelet[3407]: I0117 00:02:23.437133 3407 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:02:23.437954 kubelet[3407]: I0117 00:02:23.437554 3407 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:02:23.437954 kubelet[3407]: I0117 00:02:23.437596 3407 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:02:23.437954 kubelet[3407]: I0117 00:02:23.437640 3407 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:02:23.437954 kubelet[3407]: I0117 00:02:23.437664 3407 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:02:23.446543 kubelet[3407]: I0117 00:02:23.446063 3407 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:02:23.448244 kubelet[3407]: I0117 00:02:23.448162 3407 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:02:23.448980 kubelet[3407]: I0117 00:02:23.448926 3407 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:02:23.449084 kubelet[3407]: I0117 00:02:23.448996 3407 server.go:1287] "Started kubelet" Jan 17 00:02:23.455271 kubelet[3407]: I0117 00:02:23.455164 3407 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:02:23.456985 kubelet[3407]: I0117 00:02:23.456952 3407 server.go:479] 
"Adding debug handlers to kubelet server" Jan 17 00:02:23.457582 kubelet[3407]: I0117 00:02:23.457487 3407 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:02:23.458283 kubelet[3407]: I0117 00:02:23.457930 3407 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:02:23.462281 kubelet[3407]: I0117 00:02:23.462246 3407 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:02:23.464985 kubelet[3407]: I0117 00:02:23.464930 3407 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:02:23.475353 kubelet[3407]: I0117 00:02:23.475320 3407 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:02:23.478028 kubelet[3407]: E0117 00:02:23.477973 3407 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-30-130\" not found" Jan 17 00:02:23.478753 kubelet[3407]: I0117 00:02:23.478720 3407 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:02:23.479251 kubelet[3407]: I0117 00:02:23.479053 3407 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:02:23.505027 kubelet[3407]: I0117 00:02:23.502979 3407 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:02:23.508189 kubelet[3407]: I0117 00:02:23.508146 3407 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:02:23.510262 kubelet[3407]: I0117 00:02:23.508414 3407 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:02:23.510262 kubelet[3407]: I0117 00:02:23.508455 3407 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:02:23.510262 kubelet[3407]: I0117 00:02:23.508470 3407 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:02:23.510262 kubelet[3407]: E0117 00:02:23.508541 3407 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:02:23.543438 kubelet[3407]: I0117 00:02:23.542146 3407 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:02:23.564084 kubelet[3407]: I0117 00:02:23.564014 3407 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:02:23.564539 kubelet[3407]: E0117 00:02:23.550715 3407 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:02:23.574941 kubelet[3407]: I0117 00:02:23.572062 3407 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:02:23.609116 kubelet[3407]: E0117 00:02:23.609072 3407 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 00:02:23.671120 kubelet[3407]: I0117 00:02:23.671055 3407 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:02:23.671120 kubelet[3407]: I0117 00:02:23.671113 3407 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:02:23.672019 kubelet[3407]: I0117 00:02:23.671175 3407 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:02:23.672019 kubelet[3407]: I0117 00:02:23.671496 3407 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:02:23.672019 kubelet[3407]: I0117 00:02:23.671518 3407 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:02:23.672019 kubelet[3407]: I0117 00:02:23.671551 3407 policy_none.go:49] "None policy: Start" Jan 17 00:02:23.672019 kubelet[3407]: I0117 00:02:23.671572 3407 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:02:23.672019 kubelet[3407]: I0117 00:02:23.671592 3407 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:02:23.672019 kubelet[3407]: I0117 00:02:23.671771 3407 state_mem.go:75] "Updated machine memory state" Jan 17 00:02:23.681806 kubelet[3407]: I0117 00:02:23.679931 3407 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:02:23.681806 kubelet[3407]: I0117 00:02:23.680238 3407 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:02:23.681806 kubelet[3407]: I0117 00:02:23.680259 3407 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:02:23.681806 kubelet[3407]: I0117 00:02:23.680922 3407 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:02:23.692388 kubelet[3407]: E0117 00:02:23.692348 3407 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:02:23.800050 kubelet[3407]: I0117 00:02:23.799417 3407 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-30-130" Jan 17 00:02:23.812224 kubelet[3407]: I0117 00:02:23.810917 3407 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-30-130" Jan 17 00:02:23.812224 kubelet[3407]: I0117 00:02:23.811146 3407 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-30-130" Jan 17 00:02:23.812907 kubelet[3407]: I0117 00:02:23.810916 3407 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-130" Jan 17 00:02:23.816454 kubelet[3407]: I0117 00:02:23.815927 3407 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-30-130" Jan 17 00:02:23.816454 kubelet[3407]: I0117 00:02:23.816041 3407 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-30-130" Jan 17 00:02:23.889904 kubelet[3407]: I0117 00:02:23.889841 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1d76bbf94bdd785a2a89536caaadfcc6-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-130\" (UID: \"1d76bbf94bdd785a2a89536caaadfcc6\") " pod="kube-system/kube-controller-manager-ip-172-31-30-130" Jan 17 00:02:23.890483 kubelet[3407]: I0117 00:02:23.890450 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1d76bbf94bdd785a2a89536caaadfcc6-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-130\" (UID: \"1d76bbf94bdd785a2a89536caaadfcc6\") " pod="kube-system/kube-controller-manager-ip-172-31-30-130" Jan 17 00:02:23.890839 kubelet[3407]: I0117 00:02:23.890690 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1d76bbf94bdd785a2a89536caaadfcc6-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-130\" (UID: \"1d76bbf94bdd785a2a89536caaadfcc6\") " pod="kube-system/kube-controller-manager-ip-172-31-30-130" Jan 17 00:02:23.890839 kubelet[3407]: I0117 00:02:23.890744 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1d76bbf94bdd785a2a89536caaadfcc6-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-130\" (UID: \"1d76bbf94bdd785a2a89536caaadfcc6\") " pod="kube-system/kube-controller-manager-ip-172-31-30-130" Jan 17 00:02:23.890839 kubelet[3407]: I0117 00:02:23.890807 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2d04c4b8bc452373d2a69a7446ae71f0-ca-certs\") pod \"kube-apiserver-ip-172-31-30-130\" (UID: \"2d04c4b8bc452373d2a69a7446ae71f0\") " pod="kube-system/kube-apiserver-ip-172-31-30-130" Jan 17 00:02:23.891055 kubelet[3407]: I0117 00:02:23.890871 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2d04c4b8bc452373d2a69a7446ae71f0-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-130\" (UID: \"2d04c4b8bc452373d2a69a7446ae71f0\") " pod="kube-system/kube-apiserver-ip-172-31-30-130" Jan 17 00:02:23.891055 kubelet[3407]: I0117 00:02:23.890943 3407 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2d04c4b8bc452373d2a69a7446ae71f0-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-130\" (UID: \"2d04c4b8bc452373d2a69a7446ae71f0\") " pod="kube-system/kube-apiserver-ip-172-31-30-130" Jan 17 00:02:23.892488 kubelet[3407]: I0117 00:02:23.890996 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1d76bbf94bdd785a2a89536caaadfcc6-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-130\" (UID: \"1d76bbf94bdd785a2a89536caaadfcc6\") " pod="kube-system/kube-controller-manager-ip-172-31-30-130" Jan 17 00:02:23.892488 kubelet[3407]: I0117 00:02:23.892392 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d5e9f6ece992bff35995563a3d8ec943-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-130\" (UID: \"d5e9f6ece992bff35995563a3d8ec943\") " pod="kube-system/kube-scheduler-ip-172-31-30-130" Jan 17 00:02:24.452947 kubelet[3407]: I0117 00:02:24.452878 3407 apiserver.go:52] "Watching apiserver" Jan 17 00:02:24.481240 kubelet[3407]: I0117 00:02:24.479621 3407 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:02:24.572079 kubelet[3407]: I0117 00:02:24.570766 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-130" podStartSLOduration=1.570730378 podStartE2EDuration="1.570730378s" podCreationTimestamp="2026-01-17 00:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:02:24.555185638 +0000 UTC m=+1.278096535" watchObservedRunningTime="2026-01-17 00:02:24.570730378 +0000 UTC m=+1.293641251" Jan 17 00:02:24.573031 kubelet[3407]: I0117 00:02:24.572826 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-130" podStartSLOduration=1.572777002 podStartE2EDuration="1.572777002s" podCreationTimestamp="2026-01-17 00:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:02:24.57058993 +0000 UTC m=+1.293500827" watchObservedRunningTime="2026-01-17 00:02:24.572777002 +0000 UTC m=+1.295687875" Jan 17 00:02:24.626241 kubelet[3407]: I0117 00:02:24.625334 3407 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-30-130" Jan 17 00:02:24.636781 kubelet[3407]: I0117 00:02:24.636526 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-130" podStartSLOduration=1.636499906 podStartE2EDuration="1.636499906s" podCreationTimestamp="2026-01-17 00:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:02:24.60647029 +0000 UTC m=+1.329381175" watchObservedRunningTime="2026-01-17 00:02:24.636499906 +0000 UTC m=+1.359410767" Jan 17 00:02:24.640895 kubelet[3407]: E0117 00:02:24.640536 3407 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-30-130\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-130" Jan 17 
00:02:27.096078 kubelet[3407]: I0117 00:02:27.095893 3407 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:02:27.096834 containerd[2019]: time="2026-01-17T00:02:27.096429119Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:02:27.097347 kubelet[3407]: I0117 00:02:27.096969 3407 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:02:27.979094 systemd[1]: Created slice kubepods-besteffort-podd20439df_7461_49a2_9e14_3fe0c79a92bd.slice - libcontainer container kubepods-besteffort-podd20439df_7461_49a2_9e14_3fe0c79a92bd.slice. Jan 17 00:02:28.019764 kubelet[3407]: I0117 00:02:28.019690 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d20439df-7461-49a2-9e14-3fe0c79a92bd-xtables-lock\") pod \"kube-proxy-f2pqr\" (UID: \"d20439df-7461-49a2-9e14-3fe0c79a92bd\") " pod="kube-system/kube-proxy-f2pqr" Jan 17 00:02:28.019764 kubelet[3407]: I0117 00:02:28.019765 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d20439df-7461-49a2-9e14-3fe0c79a92bd-lib-modules\") pod \"kube-proxy-f2pqr\" (UID: \"d20439df-7461-49a2-9e14-3fe0c79a92bd\") " pod="kube-system/kube-proxy-f2pqr" Jan 17 00:02:28.019997 kubelet[3407]: I0117 00:02:28.019804 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78mcc\" (UniqueName: \"kubernetes.io/projected/d20439df-7461-49a2-9e14-3fe0c79a92bd-kube-api-access-78mcc\") pod \"kube-proxy-f2pqr\" (UID: \"d20439df-7461-49a2-9e14-3fe0c79a92bd\") " pod="kube-system/kube-proxy-f2pqr" Jan 17 00:02:28.019997 kubelet[3407]: I0117 00:02:28.019845 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d20439df-7461-49a2-9e14-3fe0c79a92bd-kube-proxy\") pod \"kube-proxy-f2pqr\" (UID: \"d20439df-7461-49a2-9e14-3fe0c79a92bd\") " pod="kube-system/kube-proxy-f2pqr" Jan 17 00:02:28.131056 systemd[1]: Created slice kubepods-besteffort-poda169510f_4b9a_4c19_88e7_2dddde459292.slice - libcontainer container kubepods-besteffort-poda169510f_4b9a_4c19_88e7_2dddde459292.slice. 
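The two "Created slice" entries above show the kubelet's systemd cgroup driver giving each BestEffort pod its own slice named after the pod UID. In systemd slice names the dash separates parent from child (kubepods.slice, kubepods-besteffort.slice, then the per-pod slice), so the dashes inside the UID itself are escaped to underscores. A minimal sketch of that naming rule, using a hypothetical helper to_slice_name rather than the kubelet's actual code:

```python
# Sketch of the per-pod slice naming visible in the "Created slice" entries.
# to_slice_name is a hypothetical helper, not a kubelet function.
def to_slice_name(pod_uid: str, qos_class: str = "besteffort") -> str:
    # '-' delimits the slice hierarchy, so dashes in the UID become '_'.
    return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

assert to_slice_name("d20439df-7461-49a2-9e14-3fe0c79a92bd") == \
    "kubepods-besteffort-podd20439df_7461_49a2_9e14_3fe0c79a92bd.slice"
```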
Jan 17 00:02:28.221548 kubelet[3407]: I0117 00:02:28.221407 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a169510f-4b9a-4c19-88e7-2dddde459292-var-lib-calico\") pod \"tigera-operator-7dcd859c48-xqddr\" (UID: \"a169510f-4b9a-4c19-88e7-2dddde459292\") " pod="tigera-operator/tigera-operator-7dcd859c48-xqddr" Jan 17 00:02:28.221548 kubelet[3407]: I0117 00:02:28.221471 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gpbfb\" (UniqueName: \"kubernetes.io/projected/a169510f-4b9a-4c19-88e7-2dddde459292-kube-api-access-gpbfb\") pod \"tigera-operator-7dcd859c48-xqddr\" (UID: \"a169510f-4b9a-4c19-88e7-2dddde459292\") " pod="tigera-operator/tigera-operator-7dcd859c48-xqddr" Jan 17 00:02:28.292911 containerd[2019]: time="2026-01-17T00:02:28.291849553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f2pqr,Uid:d20439df-7461-49a2-9e14-3fe0c79a92bd,Namespace:kube-system,Attempt:0,}" Jan 17 00:02:28.342943 containerd[2019]: time="2026-01-17T00:02:28.340611025Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:28.342943 containerd[2019]: time="2026-01-17T00:02:28.340791793Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:28.342943 containerd[2019]: time="2026-01-17T00:02:28.340870825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:28.342943 containerd[2019]: time="2026-01-17T00:02:28.341125945Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:28.407525 systemd[1]: Started cri-containerd-3b53c66697a61b9e54abc8dd04f0ea1d6b88c446e1d92dfe0e972763d66a5c92.scope - libcontainer container 3b53c66697a61b9e54abc8dd04f0ea1d6b88c446e1d92dfe0e972763d66a5c92. Jan 17 00:02:28.442816 containerd[2019]: time="2026-01-17T00:02:28.442763017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xqddr,Uid:a169510f-4b9a-4c19-88e7-2dddde459292,Namespace:tigera-operator,Attempt:0,}" Jan 17 00:02:28.457027 containerd[2019]: time="2026-01-17T00:02:28.456829729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-f2pqr,Uid:d20439df-7461-49a2-9e14-3fe0c79a92bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b53c66697a61b9e54abc8dd04f0ea1d6b88c446e1d92dfe0e972763d66a5c92\"" Jan 17 00:02:28.465318 containerd[2019]: time="2026-01-17T00:02:28.464886217Z" level=info msg="CreateContainer within sandbox \"3b53c66697a61b9e54abc8dd04f0ea1d6b88c446e1d92dfe0e972763d66a5c92\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:02:28.506418 containerd[2019]: time="2026-01-17T00:02:28.506284190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:28.506574 containerd[2019]: time="2026-01-17T00:02:28.506450846Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:28.506574 containerd[2019]: time="2026-01-17T00:02:28.506516054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:28.508337 containerd[2019]: time="2026-01-17T00:02:28.506897450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:28.512909 containerd[2019]: time="2026-01-17T00:02:28.512781266Z" level=info msg="CreateContainer within sandbox \"3b53c66697a61b9e54abc8dd04f0ea1d6b88c446e1d92dfe0e972763d66a5c92\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c4b9449dd7fd15a5667540676c462e3e3c9e79244ce2a4aa6d9f8939484335b1\"" Jan 17 00:02:28.515564 containerd[2019]: time="2026-01-17T00:02:28.515429582Z" level=info msg="StartContainer for \"c4b9449dd7fd15a5667540676c462e3e3c9e79244ce2a4aa6d9f8939484335b1\"" Jan 17 00:02:28.551966 systemd[1]: Started cri-containerd-12f7637cbf635fc40a849d283e99c25325f93a3b4143a030eefcff770aca96ad.scope - libcontainer container 12f7637cbf635fc40a849d283e99c25325f93a3b4143a030eefcff770aca96ad. Jan 17 00:02:28.605521 systemd[1]: Started cri-containerd-c4b9449dd7fd15a5667540676c462e3e3c9e79244ce2a4aa6d9f8939484335b1.scope - libcontainer container c4b9449dd7fd15a5667540676c462e3e3c9e79244ce2a4aa6d9f8939484335b1. Jan 17 00:02:28.650388 containerd[2019]: time="2026-01-17T00:02:28.650313746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-xqddr,Uid:a169510f-4b9a-4c19-88e7-2dddde459292,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"12f7637cbf635fc40a849d283e99c25325f93a3b4143a030eefcff770aca96ad\"" Jan 17 00:02:28.655687 containerd[2019]: time="2026-01-17T00:02:28.655625402Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 17 00:02:28.692749 containerd[2019]: time="2026-01-17T00:02:28.692667519Z" level=info msg="StartContainer for \"c4b9449dd7fd15a5667540676c462e3e3c9e79244ce2a4aa6d9f8939484335b1\" returns successfully" Jan 17 00:02:29.684110 kubelet[3407]: I0117 00:02:29.684001 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f2pqr" podStartSLOduration=2.683979556 podStartE2EDuration="2.683979556s" podCreationTimestamp="2026-01-17 00:02:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:02:29.663539163 +0000 UTC m=+6.386450036" watchObservedRunningTime="2026-01-17 00:02:29.683979556 +0000 UTC m=+6.406890417" Jan 17 00:02:30.550153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2368514110.mount: Deactivated successfully. 
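Note the two log dialects interleaved here: kubelet lines use klog's structured key="value" pairs, while containerd emits logfmt-style fields (time=..., level=..., msg=...). When tracing a sandbox or container ID through a boot like this, pulling those fields out programmatically beats eyeballing; the sketch below is a deliberately small parser for the containerd form only (quoted or bare values, escapes left unprocessed), not a complete logfmt implementation:

```python
import re

# Matches key=value where the value is either a "quoted string" or bare.
PAIR = re.compile(r'(\w+)=("((?:[^"\\]|\\.)*)"|\S+)')

def parse_containerd(line: str) -> dict:
    """Best-effort extraction of key=value fields from a containerd line."""
    out = {}
    for key, raw, quoted in PAIR.findall(line):
        out[key] = quoted if raw.startswith('"') else raw
    return out

fields = parse_containerd(
    'time="2026-01-17T00:02:28.655625402Z" level=info '
    'msg="PullImage \\"quay.io/tigera/operator:v1.38.7\\""')
assert fields["level"] == "info"
```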
Jan 17 00:02:31.317601 containerd[2019]: time="2026-01-17T00:02:31.317518768Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:31.319391 containerd[2019]: time="2026-01-17T00:02:31.319152664Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 17 00:02:31.322716 containerd[2019]: time="2026-01-17T00:02:31.321662932Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:31.328571 containerd[2019]: time="2026-01-17T00:02:31.328491100Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:02:31.331729 containerd[2019]: time="2026-01-17T00:02:31.331530532Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.67584111s" Jan 17 00:02:31.331729 containerd[2019]: time="2026-01-17T00:02:31.331586812Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 17 00:02:31.336099 containerd[2019]: time="2026-01-17T00:02:31.336004684Z" level=info msg="CreateContainer within sandbox \"12f7637cbf635fc40a849d283e99c25325f93a3b4143a030eefcff770aca96ad\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 00:02:31.361543 containerd[2019]: time="2026-01-17T00:02:31.361478476Z" level=info msg="CreateContainer within sandbox \"12f7637cbf635fc40a849d283e99c25325f93a3b4143a030eefcff770aca96ad\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dc7f9831bd5ad49cfc09684ba567619513aff96b77d75fafaa943334e06f169a\"" Jan 17 00:02:31.363048 containerd[2019]: time="2026-01-17T00:02:31.362855620Z" level=info msg="StartContainer for \"dc7f9831bd5ad49cfc09684ba567619513aff96b77d75fafaa943334e06f169a\"" Jan 17 00:02:31.415743 systemd[1]: Started cri-containerd-dc7f9831bd5ad49cfc09684ba567619513aff96b77d75fafaa943334e06f169a.scope - libcontainer container dc7f9831bd5ad49cfc09684ba567619513aff96b77d75fafaa943334e06f169a. 
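The pull that completes above identifies the image three ways (mutable repo tag, image ID, and content-addressed repo digest) and gives enough to estimate the transfer rate: 22152004 compressed bytes over the reported 2.67584111 s is roughly 7.9 MiB/s. A quick check of that arithmetic, with both figures copied from the entries:

```python
# Figures from the "stop pulling image" and "Pulled image ... in 2.67584111s"
# entries above: compressed bytes read over wall-clock pull time.
bytes_read   = 22_152_004
pull_seconds = 2.67584111
print(f"{bytes_read / pull_seconds / 2**20:.1f} MiB/s")  # 7.9 MiB/s
```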
Jan 17 00:02:31.476072 containerd[2019]: time="2026-01-17T00:02:31.475904704Z" level=info msg="StartContainer for \"dc7f9831bd5ad49cfc09684ba567619513aff96b77d75fafaa943334e06f169a\" returns successfully" Jan 17 00:02:31.675841 kubelet[3407]: I0117 00:02:31.675620 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-xqddr" podStartSLOduration=0.995690487 podStartE2EDuration="3.675593117s" podCreationTimestamp="2026-01-17 00:02:28 +0000 UTC" firstStartedPulling="2026-01-17 00:02:28.652857458 +0000 UTC m=+5.375768307" lastFinishedPulling="2026-01-17 00:02:31.332760088 +0000 UTC m=+8.055670937" observedRunningTime="2026-01-17 00:02:31.674282333 +0000 UTC m=+8.397193218" watchObservedRunningTime="2026-01-17 00:02:31.675593117 +0000 UTC m=+8.398504026" Jan 17 00:02:40.291520 sudo[2338]: pam_unix(sudo:session): session closed for user root Jan 17 00:02:40.368461 sshd[2335]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:40.379831 systemd[1]: sshd@6-172.31.30.130:22-68.220.241.50:50188.service: Deactivated successfully. Jan 17 00:02:40.392164 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:02:40.393724 systemd[1]: session-7.scope: Consumed 12.209s CPU time, 151.3M memory peak, 0B memory swap peak. Jan 17 00:02:40.396954 systemd-logind[2000]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:02:40.401452 systemd-logind[2000]: Removed session 7. Jan 17 00:02:56.563546 systemd[1]: Created slice kubepods-besteffort-pod055e7d0a_47f0_4288_9ef4_6ed45dab0cf3.slice - libcontainer container kubepods-besteffort-pod055e7d0a_47f0_4288_9ef4_6ed45dab0cf3.slice. Jan 17 00:02:56.618635 kubelet[3407]: I0117 00:02:56.618309 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/055e7d0a-47f0-4288-9ef4-6ed45dab0cf3-typha-certs\") pod \"calico-typha-5976586b5-qjgrm\" (UID: \"055e7d0a-47f0-4288-9ef4-6ed45dab0cf3\") " pod="calico-system/calico-typha-5976586b5-qjgrm" Jan 17 00:02:56.618635 kubelet[3407]: I0117 00:02:56.618422 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5fhd\" (UniqueName: \"kubernetes.io/projected/055e7d0a-47f0-4288-9ef4-6ed45dab0cf3-kube-api-access-g5fhd\") pod \"calico-typha-5976586b5-qjgrm\" (UID: \"055e7d0a-47f0-4288-9ef4-6ed45dab0cf3\") " pod="calico-system/calico-typha-5976586b5-qjgrm" Jan 17 00:02:56.618635 kubelet[3407]: I0117 00:02:56.618544 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/055e7d0a-47f0-4288-9ef4-6ed45dab0cf3-tigera-ca-bundle\") pod \"calico-typha-5976586b5-qjgrm\" (UID: \"055e7d0a-47f0-4288-9ef4-6ed45dab0cf3\") " pod="calico-system/calico-typha-5976586b5-qjgrm" Jan 17 00:02:56.804033 systemd[1]: Created slice kubepods-besteffort-pod2f325039_35a3_48b4_a8e4_1286c1c1603a.slice - libcontainer container kubepods-besteffort-pod2f325039_35a3_48b4_a8e4_1286c1c1603a.slice. 
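The pod_startup_latency_tracker entry for tigera-operator at the top of this stretch shows how the two reported durations relate: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from it, which is why pods that pulled nothing, such as kube-proxy and the static control-plane pods earlier, report identical values for both. Reproducing the arithmetic from the entry's timestamps, truncated to microseconds:

```python
from datetime import datetime, timezone

UTC = timezone.utc
created    = datetime(2026, 1, 17, 0, 2, 28, tzinfo=UTC)          # podCreationTimestamp
running    = datetime(2026, 1, 17, 0, 2, 31, 675593, tzinfo=UTC)  # watchObservedRunningTime
pull_start = datetime(2026, 1, 17, 0, 2, 28, 652857, tzinfo=UTC)  # firstStartedPulling
pull_end   = datetime(2026, 1, 17, 0, 2, 31, 332760, tzinfo=UTC)  # lastFinishedPulling

e2e = (running - created).total_seconds()            # ~3.675593 = podStartE2EDuration
slo = e2e - (pull_end - pull_start).total_seconds()  # ~0.995690 = podStartSLOduration
print(f"E2E={e2e:.6f}s SLO={slo:.6f}s")
```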
Jan 17 00:02:56.820742 kubelet[3407]: I0117 00:02:56.820135 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f325039-35a3-48b4-a8e4-1286c1c1603a-xtables-lock\") pod \"calico-node-6frmc\" (UID: \"2f325039-35a3-48b4-a8e4-1286c1c1603a\") " pod="calico-system/calico-node-6frmc" Jan 17 00:02:56.820978 kubelet[3407]: I0117 00:02:56.820946 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5cbqv\" (UniqueName: \"kubernetes.io/projected/2f325039-35a3-48b4-a8e4-1286c1c1603a-kube-api-access-5cbqv\") pod \"calico-node-6frmc\" (UID: \"2f325039-35a3-48b4-a8e4-1286c1c1603a\") " pod="calico-system/calico-node-6frmc" Jan 17 00:02:56.821711 kubelet[3407]: I0117 00:02:56.821093 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2f325039-35a3-48b4-a8e4-1286c1c1603a-var-run-calico\") pod \"calico-node-6frmc\" (UID: \"2f325039-35a3-48b4-a8e4-1286c1c1603a\") " pod="calico-system/calico-node-6frmc" Jan 17 00:02:56.821711 kubelet[3407]: I0117 00:02:56.821161 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2f325039-35a3-48b4-a8e4-1286c1c1603a-cni-log-dir\") pod \"calico-node-6frmc\" (UID: \"2f325039-35a3-48b4-a8e4-1286c1c1603a\") " pod="calico-system/calico-node-6frmc" Jan 17 00:02:56.821711 kubelet[3407]: I0117 00:02:56.821232 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2f325039-35a3-48b4-a8e4-1286c1c1603a-cni-net-dir\") pod \"calico-node-6frmc\" (UID: \"2f325039-35a3-48b4-a8e4-1286c1c1603a\") " pod="calico-system/calico-node-6frmc" Jan 17 00:02:56.821711 kubelet[3407]: I0117 00:02:56.821275 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f325039-35a3-48b4-a8e4-1286c1c1603a-lib-modules\") pod \"calico-node-6frmc\" (UID: \"2f325039-35a3-48b4-a8e4-1286c1c1603a\") " pod="calico-system/calico-node-6frmc" Jan 17 00:02:56.821711 kubelet[3407]: I0117 00:02:56.821322 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2f325039-35a3-48b4-a8e4-1286c1c1603a-tigera-ca-bundle\") pod \"calico-node-6frmc\" (UID: \"2f325039-35a3-48b4-a8e4-1286c1c1603a\") " pod="calico-system/calico-node-6frmc" Jan 17 00:02:56.822031 kubelet[3407]: I0117 00:02:56.821364 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2f325039-35a3-48b4-a8e4-1286c1c1603a-policysync\") pod \"calico-node-6frmc\" (UID: \"2f325039-35a3-48b4-a8e4-1286c1c1603a\") " pod="calico-system/calico-node-6frmc" Jan 17 00:02:56.822031 kubelet[3407]: I0117 00:02:56.821403 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2f325039-35a3-48b4-a8e4-1286c1c1603a-var-lib-calico\") pod \"calico-node-6frmc\" (UID: \"2f325039-35a3-48b4-a8e4-1286c1c1603a\") " pod="calico-system/calico-node-6frmc" Jan 17 00:02:56.822031 kubelet[3407]: I0117 00:02:56.821441 3407 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2f325039-35a3-48b4-a8e4-1286c1c1603a-cni-bin-dir\") pod \"calico-node-6frmc\" (UID: \"2f325039-35a3-48b4-a8e4-1286c1c1603a\") " pod="calico-system/calico-node-6frmc" Jan 17 00:02:56.822031 kubelet[3407]: I0117 00:02:56.821479 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2f325039-35a3-48b4-a8e4-1286c1c1603a-flexvol-driver-host\") pod \"calico-node-6frmc\" (UID: \"2f325039-35a3-48b4-a8e4-1286c1c1603a\") " pod="calico-system/calico-node-6frmc" Jan 17 00:02:56.822031 kubelet[3407]: I0117 00:02:56.821514 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2f325039-35a3-48b4-a8e4-1286c1c1603a-node-certs\") pod \"calico-node-6frmc\" (UID: \"2f325039-35a3-48b4-a8e4-1286c1c1603a\") " pod="calico-system/calico-node-6frmc" Jan 17 00:02:56.870353 containerd[2019]: time="2026-01-17T00:02:56.870255439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5976586b5-qjgrm,Uid:055e7d0a-47f0-4288-9ef4-6ed45dab0cf3,Namespace:calico-system,Attempt:0,}" Jan 17 00:02:56.928878 kubelet[3407]: E0117 00:02:56.927440 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl689" podUID="0e8ea394-25e8-46d5-8e69-e40f87a471c2" Jan 17 00:02:56.932319 kubelet[3407]: E0117 00:02:56.932258 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:56.932319 kubelet[3407]: W0117 00:02:56.932303 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:56.932537 kubelet[3407]: E0117 00:02:56.932343 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:56.936025 kubelet[3407]: E0117 00:02:56.935958 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:56.936363 kubelet[3407]: W0117 00:02:56.936325 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:56.936663 kubelet[3407]: E0117 00:02:56.936628 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:56.938116 kubelet[3407]: E0117 00:02:56.937190 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:56.938332 kubelet[3407]: W0117 00:02:56.938231 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:56.938768 kubelet[3407]: E0117 00:02:56.938398 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:56.938944 containerd[2019]: time="2026-01-17T00:02:56.934679467Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:56.938944 containerd[2019]: time="2026-01-17T00:02:56.934838071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:56.938944 containerd[2019]: time="2026-01-17T00:02:56.935252575Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:56.939904 containerd[2019]: time="2026-01-17T00:02:56.935489731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:56.943245 kubelet[3407]: E0117 00:02:56.942978 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:56.943245 kubelet[3407]: W0117 00:02:56.943025 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:56.943245 kubelet[3407]: E0117 00:02:56.943083 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:56.946486 kubelet[3407]: E0117 00:02:56.945740 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:56.946486 kubelet[3407]: W0117 00:02:56.945772 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:56.946486 kubelet[3407]: E0117 00:02:56.946387 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:56.954233 kubelet[3407]: E0117 00:02:56.952431 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:56.954233 kubelet[3407]: W0117 00:02:56.952586 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:56.954233 kubelet[3407]: E0117 00:02:56.953483 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:56.959779 kubelet[3407]: E0117 00:02:56.959628 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:56.959779 kubelet[3407]: W0117 00:02:56.959669 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:56.959779 kubelet[3407]: E0117 00:02:56.959705 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:56.971598 kubelet[3407]: E0117 00:02:56.968956 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:56.971598 kubelet[3407]: W0117 00:02:56.968999 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:56.971598 kubelet[3407]: E0117 00:02:56.969033 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:56.982851 kubelet[3407]: E0117 00:02:56.979593 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:56.982851 kubelet[3407]: W0117 00:02:56.979708 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:56.982851 kubelet[3407]: E0117 00:02:56.979743 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:56.992143 kubelet[3407]: E0117 00:02:56.992105 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:56.993817 kubelet[3407]: W0117 00:02:56.993703 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:56.993817 kubelet[3407]: E0117 00:02:56.993757 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:56.997430 kubelet[3407]: E0117 00:02:56.996433 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:56.997430 kubelet[3407]: W0117 00:02:56.996470 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:56.997430 kubelet[3407]: E0117 00:02:56.996504 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:56.998471 kubelet[3407]: E0117 00:02:56.998315 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:56.998471 kubelet[3407]: W0117 00:02:56.998355 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:56.998471 kubelet[3407]: E0117 00:02:56.998431 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:56.999066 kubelet[3407]: E0117 00:02:56.999027 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:56.999066 kubelet[3407]: W0117 00:02:56.999059 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:56.999219 kubelet[3407]: E0117 00:02:56.999086 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.001214 kubelet[3407]: E0117 00:02:57.001158 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.001214 kubelet[3407]: W0117 00:02:57.001191 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.001387 kubelet[3407]: E0117 00:02:57.001239 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.001936 kubelet[3407]: E0117 00:02:57.001759 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.001936 kubelet[3407]: W0117 00:02:57.001930 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.002074 kubelet[3407]: E0117 00:02:57.001956 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.003950 kubelet[3407]: E0117 00:02:57.003733 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.003950 kubelet[3407]: W0117 00:02:57.003772 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.003950 kubelet[3407]: E0117 00:02:57.003808 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:57.004465 kubelet[3407]: E0117 00:02:57.004131 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.004465 kubelet[3407]: W0117 00:02:57.004147 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.004465 kubelet[3407]: E0117 00:02:57.004167 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.005723 kubelet[3407]: E0117 00:02:57.005158 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.005723 kubelet[3407]: W0117 00:02:57.005217 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.005723 kubelet[3407]: E0117 00:02:57.005253 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.007256 kubelet[3407]: E0117 00:02:57.006563 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.007256 kubelet[3407]: W0117 00:02:57.006627 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.007256 kubelet[3407]: E0117 00:02:57.006661 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.008859 kubelet[3407]: E0117 00:02:57.007887 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.008859 kubelet[3407]: W0117 00:02:57.007932 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.008859 kubelet[3407]: E0117 00:02:57.007970 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.010049 kubelet[3407]: E0117 00:02:57.009320 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.010049 kubelet[3407]: W0117 00:02:57.009360 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.010049 kubelet[3407]: E0117 00:02:57.009395 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:57.014555 kubelet[3407]: E0117 00:02:57.013177 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.014555 kubelet[3407]: W0117 00:02:57.014368 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.014555 kubelet[3407]: E0117 00:02:57.014453 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.016258 kubelet[3407]: E0117 00:02:57.015626 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.016258 kubelet[3407]: W0117 00:02:57.015672 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.016258 kubelet[3407]: E0117 00:02:57.015706 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.017580 kubelet[3407]: E0117 00:02:57.017532 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.017580 kubelet[3407]: W0117 00:02:57.017568 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.021297 kubelet[3407]: E0117 00:02:57.020254 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.023618 kubelet[3407]: E0117 00:02:57.021464 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.023618 kubelet[3407]: W0117 00:02:57.021499 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.023618 kubelet[3407]: E0117 00:02:57.021534 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.025269 kubelet[3407]: E0117 00:02:57.024441 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.025431 kubelet[3407]: W0117 00:02:57.025279 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.025431 kubelet[3407]: E0117 00:02:57.025318 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:57.032885 kubelet[3407]: E0117 00:02:57.032685 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.032885 kubelet[3407]: W0117 00:02:57.032723 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.032885 kubelet[3407]: E0117 00:02:57.032777 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.033466 kubelet[3407]: E0117 00:02:57.033275 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.033466 kubelet[3407]: W0117 00:02:57.033306 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.033466 kubelet[3407]: E0117 00:02:57.033333 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.035453 kubelet[3407]: E0117 00:02:57.034771 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.035453 kubelet[3407]: W0117 00:02:57.034923 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.035453 kubelet[3407]: E0117 00:02:57.034956 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.037688 kubelet[3407]: E0117 00:02:57.037317 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.037688 kubelet[3407]: W0117 00:02:57.037361 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.037688 kubelet[3407]: E0117 00:02:57.037432 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.039378 kubelet[3407]: E0117 00:02:57.038679 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.039378 kubelet[3407]: W0117 00:02:57.038722 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.039378 kubelet[3407]: E0117 00:02:57.038771 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:57.039378 kubelet[3407]: I0117 00:02:57.038815 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e8ea394-25e8-46d5-8e69-e40f87a471c2-kubelet-dir\") pod \"csi-node-driver-jl689\" (UID: \"0e8ea394-25e8-46d5-8e69-e40f87a471c2\") " pod="calico-system/csi-node-driver-jl689" Jan 17 00:02:57.042406 kubelet[3407]: E0117 00:02:57.041595 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.042406 kubelet[3407]: W0117 00:02:57.041654 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.042406 kubelet[3407]: E0117 00:02:57.042100 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.042406 kubelet[3407]: W0117 00:02:57.042118 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.043264 kubelet[3407]: E0117 00:02:57.042541 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.043264 kubelet[3407]: I0117 00:02:57.042595 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0e8ea394-25e8-46d5-8e69-e40f87a471c2-registration-dir\") pod \"csi-node-driver-jl689\" (UID: \"0e8ea394-25e8-46d5-8e69-e40f87a471c2\") " pod="calico-system/csi-node-driver-jl689" Jan 17 00:02:57.043264 kubelet[3407]: E0117 00:02:57.042643 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.043264 kubelet[3407]: E0117 00:02:57.042761 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.043264 kubelet[3407]: W0117 00:02:57.042793 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.043264 kubelet[3407]: E0117 00:02:57.042819 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.044933 kubelet[3407]: E0117 00:02:57.044013 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.044933 kubelet[3407]: W0117 00:02:57.044053 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.045653 kubelet[3407]: E0117 00:02:57.045394 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:57.046293 kubelet[3407]: E0117 00:02:57.046246 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.046619 kubelet[3407]: W0117 00:02:57.046285 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.046876 kubelet[3407]: E0117 00:02:57.046735 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.046876 kubelet[3407]: I0117 00:02:57.046809 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0e8ea394-25e8-46d5-8e69-e40f87a471c2-socket-dir\") pod \"csi-node-driver-jl689\" (UID: \"0e8ea394-25e8-46d5-8e69-e40f87a471c2\") " pod="calico-system/csi-node-driver-jl689" Jan 17 00:02:57.049984 kubelet[3407]: E0117 00:02:57.048984 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.049984 kubelet[3407]: W0117 00:02:57.049228 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.049984 kubelet[3407]: E0117 00:02:57.049303 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.049302 systemd[1]: Started cri-containerd-8e3047bf7bb6f7e5db38d49019d3ce6e41eccd345ba4004ac5913c2b189a9a06.scope - libcontainer container 8e3047bf7bb6f7e5db38d49019d3ce6e41eccd345ba4004ac5913c2b189a9a06. Jan 17 00:02:57.052066 kubelet[3407]: E0117 00:02:57.051763 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.052066 kubelet[3407]: W0117 00:02:57.051791 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.054250 kubelet[3407]: E0117 00:02:57.052385 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.054250 kubelet[3407]: W0117 00:02:57.052436 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.054250 kubelet[3407]: E0117 00:02:57.052719 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.054250 kubelet[3407]: E0117 00:02:57.052762 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:57.054250 kubelet[3407]: I0117 00:02:57.052803 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmhd6\" (UniqueName: \"kubernetes.io/projected/0e8ea394-25e8-46d5-8e69-e40f87a471c2-kube-api-access-vmhd6\") pod \"csi-node-driver-jl689\" (UID: \"0e8ea394-25e8-46d5-8e69-e40f87a471c2\") " pod="calico-system/csi-node-driver-jl689" Jan 17 00:02:57.056462 kubelet[3407]: E0117 00:02:57.055544 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.056462 kubelet[3407]: W0117 00:02:57.055584 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.056462 kubelet[3407]: E0117 00:02:57.055643 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.056462 kubelet[3407]: E0117 00:02:57.056154 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.056462 kubelet[3407]: W0117 00:02:57.056175 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.058655 kubelet[3407]: E0117 00:02:57.058280 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.059627 kubelet[3407]: E0117 00:02:57.059593 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.060038 kubelet[3407]: W0117 00:02:57.059974 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.061179 kubelet[3407]: E0117 00:02:57.060853 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.061179 kubelet[3407]: I0117 00:02:57.060924 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0e8ea394-25e8-46d5-8e69-e40f87a471c2-varrun\") pod \"csi-node-driver-jl689\" (UID: \"0e8ea394-25e8-46d5-8e69-e40f87a471c2\") " pod="calico-system/csi-node-driver-jl689" Jan 17 00:02:57.062837 kubelet[3407]: E0117 00:02:57.062803 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.063788 kubelet[3407]: W0117 00:02:57.063752 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.064228 kubelet[3407]: E0117 00:02:57.064127 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:57.065462 kubelet[3407]: E0117 00:02:57.065338 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.065462 kubelet[3407]: W0117 00:02:57.065373 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.065462 kubelet[3407]: E0117 00:02:57.065421 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.066373 kubelet[3407]: E0117 00:02:57.066328 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.066373 kubelet[3407]: W0117 00:02:57.066365 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.066615 kubelet[3407]: E0117 00:02:57.066394 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.067940 kubelet[3407]: E0117 00:02:57.067884 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.067940 kubelet[3407]: W0117 00:02:57.067930 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.068144 kubelet[3407]: E0117 00:02:57.067963 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.113529 containerd[2019]: time="2026-01-17T00:02:57.112621840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6frmc,Uid:2f325039-35a3-48b4-a8e4-1286c1c1603a,Namespace:calico-system,Attempt:0,}" Jan 17 00:02:57.165722 kubelet[3407]: E0117 00:02:57.165372 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.165722 kubelet[3407]: W0117 00:02:57.165410 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.165722 kubelet[3407]: E0117 00:02:57.165452 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:57.167505 kubelet[3407]: E0117 00:02:57.167466 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.168222 kubelet[3407]: W0117 00:02:57.167747 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.168222 kubelet[3407]: E0117 00:02:57.167804 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.171894 kubelet[3407]: E0117 00:02:57.171547 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.171894 kubelet[3407]: W0117 00:02:57.171585 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.171894 kubelet[3407]: E0117 00:02:57.171651 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.173253 kubelet[3407]: E0117 00:02:57.172952 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.173949 kubelet[3407]: W0117 00:02:57.173186 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.173949 kubelet[3407]: E0117 00:02:57.173644 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.175738 kubelet[3407]: E0117 00:02:57.175465 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.175738 kubelet[3407]: W0117 00:02:57.175501 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.176962 kubelet[3407]: E0117 00:02:57.176714 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.178557 kubelet[3407]: E0117 00:02:57.177797 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.178557 kubelet[3407]: W0117 00:02:57.177829 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.179850 kubelet[3407]: E0117 00:02:57.179483 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:57.181923 kubelet[3407]: E0117 00:02:57.181466 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.181923 kubelet[3407]: W0117 00:02:57.181502 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.184701 kubelet[3407]: E0117 00:02:57.184002 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.187242 kubelet[3407]: E0117 00:02:57.186388 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.187649 kubelet[3407]: W0117 00:02:57.187417 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.187810 kubelet[3407]: E0117 00:02:57.187779 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.190631 kubelet[3407]: E0117 00:02:57.190396 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.190631 kubelet[3407]: W0117 00:02:57.190427 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.193795 kubelet[3407]: E0117 00:02:57.193750 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.197300 kubelet[3407]: E0117 00:02:57.194407 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.197300 kubelet[3407]: W0117 00:02:57.195727 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.197300 kubelet[3407]: E0117 00:02:57.195818 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.202260 kubelet[3407]: E0117 00:02:57.200397 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.202260 kubelet[3407]: W0117 00:02:57.200433 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.202260 kubelet[3407]: E0117 00:02:57.200590 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:57.202883 kubelet[3407]: E0117 00:02:57.202711 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.202883 kubelet[3407]: W0117 00:02:57.202743 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.202883 kubelet[3407]: E0117 00:02:57.202832 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.204110 kubelet[3407]: E0117 00:02:57.203903 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.204110 kubelet[3407]: W0117 00:02:57.203935 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.204110 kubelet[3407]: E0117 00:02:57.203997 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.204396 containerd[2019]: time="2026-01-17T00:02:57.201137896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:02:57.207255 kubelet[3407]: E0117 00:02:57.206099 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.207255 kubelet[3407]: W0117 00:02:57.206383 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.207255 kubelet[3407]: E0117 00:02:57.206458 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.207472 containerd[2019]: time="2026-01-17T00:02:57.206273068Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:02:57.207472 containerd[2019]: time="2026-01-17T00:02:57.206987728Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:57.209522 kubelet[3407]: E0117 00:02:57.208995 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.209522 kubelet[3407]: W0117 00:02:57.209031 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.209522 kubelet[3407]: E0117 00:02:57.209106 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.210819 containerd[2019]: time="2026-01-17T00:02:57.209339464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:02:57.211504 kubelet[3407]: E0117 00:02:57.211059 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.211504 kubelet[3407]: W0117 00:02:57.211091 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.211504 kubelet[3407]: E0117 00:02:57.211166 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.214654 kubelet[3407]: E0117 00:02:57.214350 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.214654 kubelet[3407]: W0117 00:02:57.214386 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.214857 kubelet[3407]: E0117 00:02:57.214671 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.216865 kubelet[3407]: E0117 00:02:57.216442 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.216865 kubelet[3407]: W0117 00:02:57.216478 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.216865 kubelet[3407]: E0117 00:02:57.216559 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.217680 kubelet[3407]: E0117 00:02:57.217464 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.217680 kubelet[3407]: W0117 00:02:57.217494 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.217680 kubelet[3407]: E0117 00:02:57.217674 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:57.218716 kubelet[3407]: E0117 00:02:57.218686 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.219019 kubelet[3407]: W0117 00:02:57.218831 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.219602 kubelet[3407]: E0117 00:02:57.219436 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.220425 kubelet[3407]: W0117 00:02:57.219776 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.220425 kubelet[3407]: E0117 00:02:57.219536 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.220425 kubelet[3407]: E0117 00:02:57.220339 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.221935 kubelet[3407]: E0117 00:02:57.221661 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.221935 kubelet[3407]: W0117 00:02:57.221697 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.222871 kubelet[3407]: E0117 00:02:57.222268 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.224400 kubelet[3407]: E0117 00:02:57.224359 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.224816 kubelet[3407]: W0117 00:02:57.224556 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.225106 kubelet[3407]: E0117 00:02:57.225077 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.225540 kubelet[3407]: E0117 00:02:57.225424 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.227268 kubelet[3407]: W0117 00:02:57.225694 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.227268 kubelet[3407]: E0117 00:02:57.225754 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:02:57.229694 kubelet[3407]: E0117 00:02:57.229565 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.229694 kubelet[3407]: W0117 00:02:57.229601 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.229694 kubelet[3407]: E0117 00:02:57.229634 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.280985 systemd[1]: Started cri-containerd-282c8d6458150c9d14c568c0f4143da3919400712ca7b863d5a9b42b0a1cd1c5.scope - libcontainer container 282c8d6458150c9d14c568c0f4143da3919400712ca7b863d5a9b42b0a1cd1c5. Jan 17 00:02:57.326247 kubelet[3407]: E0117 00:02:57.324417 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:02:57.326247 kubelet[3407]: W0117 00:02:57.324455 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:02:57.326247 kubelet[3407]: E0117 00:02:57.324489 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:02:57.426912 containerd[2019]: time="2026-01-17T00:02:57.425361425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6frmc,Uid:2f325039-35a3-48b4-a8e4-1286c1c1603a,Namespace:calico-system,Attempt:0,} returns sandbox id \"282c8d6458150c9d14c568c0f4143da3919400712ca7b863d5a9b42b0a1cd1c5\"" Jan 17 00:02:57.433892 containerd[2019]: time="2026-01-17T00:02:57.433038245Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 17 00:02:57.607833 containerd[2019]: time="2026-01-17T00:02:57.607734138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5976586b5-qjgrm,Uid:055e7d0a-47f0-4288-9ef4-6ed45dab0cf3,Namespace:calico-system,Attempt:0,} returns sandbox id \"8e3047bf7bb6f7e5db38d49019d3ce6e41eccd345ba4004ac5913c2b189a9a06\"" Jan 17 00:02:58.509547 kubelet[3407]: E0117 00:02:58.509286 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl689" podUID="0e8ea394-25e8-46d5-8e69-e40f87a471c2" Jan 17 00:02:58.620406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2398426842.mount: Deactivated successfully. 
Jan 17 00:02:58.765397 containerd[2019]: time="2026-01-17T00:02:58.764344232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:02:58.766402 containerd[2019]: time="2026-01-17T00:02:58.766173152Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=5636570"
Jan 17 00:02:58.768324 containerd[2019]: time="2026-01-17T00:02:58.767817848Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:02:58.772805 containerd[2019]: time="2026-01-17T00:02:58.772729484Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:02:58.773669 containerd[2019]: time="2026-01-17T00:02:58.773605472Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.340491387s"
Jan 17 00:02:58.773785 containerd[2019]: time="2026-01-17T00:02:58.773666084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Jan 17 00:02:58.781737 containerd[2019]: time="2026-01-17T00:02:58.781611944Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 17 00:02:58.789502 containerd[2019]: time="2026-01-17T00:02:58.789431900Z" level=info msg="CreateContainer within sandbox \"282c8d6458150c9d14c568c0f4143da3919400712ca7b863d5a9b42b0a1cd1c5\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 17 00:02:58.816850 containerd[2019]: time="2026-01-17T00:02:58.816778532Z" level=info msg="CreateContainer within sandbox \"282c8d6458150c9d14c568c0f4143da3919400712ca7b863d5a9b42b0a1cd1c5\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b9faf20369baaf0a0b096b12548e0c8a269dc80b78fcd1a0b8ddbf9f526891de\""
Jan 17 00:02:58.819272 containerd[2019]: time="2026-01-17T00:02:58.817880588Z" level=info msg="StartContainer for \"b9faf20369baaf0a0b096b12548e0c8a269dc80b78fcd1a0b8ddbf9f526891de\""
Jan 17 00:02:58.880949 systemd[1]: Started cri-containerd-b9faf20369baaf0a0b096b12548e0c8a269dc80b78fcd1a0b8ddbf9f526891de.scope - libcontainer container b9faf20369baaf0a0b096b12548e0c8a269dc80b78fcd1a0b8ddbf9f526891de.
Jan 17 00:02:58.933432 containerd[2019]: time="2026-01-17T00:02:58.933364641Z" level=info msg="StartContainer for \"b9faf20369baaf0a0b096b12548e0c8a269dc80b78fcd1a0b8ddbf9f526891de\" returns successfully"
Jan 17 00:02:58.966467 systemd[1]: cri-containerd-b9faf20369baaf0a0b096b12548e0c8a269dc80b78fcd1a0b8ddbf9f526891de.scope: Deactivated successfully.
Jan 17 00:02:59.127963 containerd[2019]: time="2026-01-17T00:02:59.127877598Z" level=info msg="shim disconnected" id=b9faf20369baaf0a0b096b12548e0c8a269dc80b78fcd1a0b8ddbf9f526891de namespace=k8s.io
Jan 17 00:02:59.128598 containerd[2019]: time="2026-01-17T00:02:59.128321082Z" level=warning msg="cleaning up after shim disconnected" id=b9faf20369baaf0a0b096b12548e0c8a269dc80b78fcd1a0b8ddbf9f526891de namespace=k8s.io
Jan 17 00:02:59.128598 containerd[2019]: time="2026-01-17T00:02:59.128352918Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:02:59.809960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9faf20369baaf0a0b096b12548e0c8a269dc80b78fcd1a0b8ddbf9f526891de-rootfs.mount: Deactivated successfully.
Jan 17 00:03:00.509268 kubelet[3407]: E0117 00:03:00.509190 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl689" podUID="0e8ea394-25e8-46d5-8e69-e40f87a471c2"
Jan 17 00:03:00.701498 containerd[2019]: time="2026-01-17T00:03:00.701424478Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:03:00.703075 containerd[2019]: time="2026-01-17T00:03:00.702939226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31720858"
Jan 17 00:03:00.706263 containerd[2019]: time="2026-01-17T00:03:00.704342554Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:03:00.710507 containerd[2019]: time="2026-01-17T00:03:00.710352070Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:03:00.711976 containerd[2019]: time="2026-01-17T00:03:00.711913090Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.930189798s"
Jan 17 00:03:00.712169 containerd[2019]: time="2026-01-17T00:03:00.712135570Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Jan 17 00:03:00.716533 containerd[2019]: time="2026-01-17T00:03:00.716483518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 17 00:03:00.742588 containerd[2019]: time="2026-01-17T00:03:00.742535530Z" level=info msg="CreateContainer within sandbox \"8e3047bf7bb6f7e5db38d49019d3ce6e41eccd345ba4004ac5913c2b189a9a06\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 17 00:03:00.768606 containerd[2019]: time="2026-01-17T00:03:00.768464758Z" level=info msg="CreateContainer within sandbox \"8e3047bf7bb6f7e5db38d49019d3ce6e41eccd345ba4004ac5913c2b189a9a06\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"689567295b9b68a382af6b35e986a6ceb9caf244863a72e3f31cfb60cdf48d42\""
Jan 17 00:03:00.773430 containerd[2019]: time="2026-01-17T00:03:00.773080330Z" level=info msg="StartContainer for \"689567295b9b68a382af6b35e986a6ceb9caf244863a72e3f31cfb60cdf48d42\""
Jan 17 00:03:00.833583 systemd[1]: Started cri-containerd-689567295b9b68a382af6b35e986a6ceb9caf244863a72e3f31cfb60cdf48d42.scope - libcontainer container 689567295b9b68a382af6b35e986a6ceb9caf244863a72e3f31cfb60cdf48d42.
Jan 17 00:03:00.906701 containerd[2019]: time="2026-01-17T00:03:00.906587807Z" level=info msg="StartContainer for \"689567295b9b68a382af6b35e986a6ceb9caf244863a72e3f31cfb60cdf48d42\" returns successfully"
Jan 17 00:03:01.818157 kubelet[3407]: I0117 00:03:01.816467 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5976586b5-qjgrm" podStartSLOduration=2.712151359 podStartE2EDuration="5.816438407s" podCreationTimestamp="2026-01-17 00:02:56 +0000 UTC" firstStartedPulling="2026-01-17 00:02:57.61081047 +0000 UTC m=+34.333721331" lastFinishedPulling="2026-01-17 00:03:00.715097518 +0000 UTC m=+37.438008379" observedRunningTime="2026-01-17 00:03:01.793632347 +0000 UTC m=+38.516543232" watchObservedRunningTime="2026-01-17 00:03:01.816438407 +0000 UTC m=+38.539349304"
Jan 17 00:03:02.510442 kubelet[3407]: E0117 00:03:02.508981 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl689" podUID="0e8ea394-25e8-46d5-8e69-e40f87a471c2"
msg="CreateContainer within sandbox \"282c8d6458150c9d14c568c0f4143da3919400712ca7b863d5a9b42b0a1cd1c5\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"59b64848f47ba9d8e610f947ddcb12f231157785b481424f4dc6e74f889831ad\"" Jan 17 00:03:03.697993 containerd[2019]: time="2026-01-17T00:03:03.697389732Z" level=info msg="StartContainer for \"59b64848f47ba9d8e610f947ddcb12f231157785b481424f4dc6e74f889831ad\"" Jan 17 00:03:03.772571 systemd[1]: Started cri-containerd-59b64848f47ba9d8e610f947ddcb12f231157785b481424f4dc6e74f889831ad.scope - libcontainer container 59b64848f47ba9d8e610f947ddcb12f231157785b481424f4dc6e74f889831ad. Jan 17 00:03:03.828312 containerd[2019]: time="2026-01-17T00:03:03.828101761Z" level=info msg="StartContainer for \"59b64848f47ba9d8e610f947ddcb12f231157785b481424f4dc6e74f889831ad\" returns successfully" Jan 17 00:03:04.509999 kubelet[3407]: E0117 00:03:04.509532 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jl689" podUID="0e8ea394-25e8-46d5-8e69-e40f87a471c2" Jan 17 00:03:05.077081 containerd[2019]: time="2026-01-17T00:03:05.076975427Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 00:03:05.082306 systemd[1]: cri-containerd-59b64848f47ba9d8e610f947ddcb12f231157785b481424f4dc6e74f889831ad.scope: Deactivated successfully. Jan 17 00:03:05.097036 kubelet[3407]: I0117 00:03:05.096997 3407 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 17 00:03:05.140638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59b64848f47ba9d8e610f947ddcb12f231157785b481424f4dc6e74f889831ad-rootfs.mount: Deactivated successfully. Jan 17 00:03:05.188817 systemd[1]: Created slice kubepods-burstable-pod100157b3_6e13_496d_9d2a_b11a40a79c18.slice - libcontainer container kubepods-burstable-pod100157b3_6e13_496d_9d2a_b11a40a79c18.slice. Jan 17 00:03:05.208103 systemd[1]: Created slice kubepods-besteffort-podcecd0bc0_a5de_49ac_853f_0e0f9c309bd4.slice - libcontainer container kubepods-besteffort-podcecd0bc0_a5de_49ac_853f_0e0f9c309bd4.slice. Jan 17 00:03:05.239134 systemd[1]: Created slice kubepods-burstable-pod3bc218c2_324a_4549_a1e2_ab9fb6d1d96d.slice - libcontainer container kubepods-burstable-pod3bc218c2_324a_4549_a1e2_ab9fb6d1d96d.slice. 
Jan 17 00:03:05.287254 kubelet[3407]: I0117 00:03:05.285385 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/cecd0bc0-a5de-49ac-853f-0e0f9c309bd4-calico-apiserver-certs\") pod \"calico-apiserver-76869b6969-b9cdk\" (UID: \"cecd0bc0-a5de-49ac-853f-0e0f9c309bd4\") " pod="calico-apiserver/calico-apiserver-76869b6969-b9cdk"
Jan 17 00:03:05.287254 kubelet[3407]: I0117 00:03:05.285467 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhg7j\" (UniqueName: \"kubernetes.io/projected/100157b3-6e13-496d-9d2a-b11a40a79c18-kube-api-access-zhg7j\") pod \"coredns-668d6bf9bc-p9b6s\" (UID: \"100157b3-6e13-496d-9d2a-b11a40a79c18\") " pod="kube-system/coredns-668d6bf9bc-p9b6s"
Jan 17 00:03:05.287254 kubelet[3407]: I0117 00:03:05.285506 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xfn9w\" (UniqueName: \"kubernetes.io/projected/cecd0bc0-a5de-49ac-853f-0e0f9c309bd4-kube-api-access-xfn9w\") pod \"calico-apiserver-76869b6969-b9cdk\" (UID: \"cecd0bc0-a5de-49ac-853f-0e0f9c309bd4\") " pod="calico-apiserver/calico-apiserver-76869b6969-b9cdk"
Jan 17 00:03:05.287254 kubelet[3407]: I0117 00:03:05.285564 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/100157b3-6e13-496d-9d2a-b11a40a79c18-config-volume\") pod \"coredns-668d6bf9bc-p9b6s\" (UID: \"100157b3-6e13-496d-9d2a-b11a40a79c18\") " pod="kube-system/coredns-668d6bf9bc-p9b6s"
Jan 17 00:03:05.306307 systemd[1]: Created slice kubepods-besteffort-pod98e123b4_3ef3_4dbb_b304_2875273a6844.slice - libcontainer container kubepods-besteffort-pod98e123b4_3ef3_4dbb_b304_2875273a6844.slice.
Jan 17 00:03:05.322411 systemd[1]: Created slice kubepods-besteffort-pod2760346a_cdd2_4959_9cca_5bf87123f24a.slice - libcontainer container kubepods-besteffort-pod2760346a_cdd2_4959_9cca_5bf87123f24a.slice.
Jan 17 00:03:05.353824 systemd[1]: Created slice kubepods-besteffort-podb69cd1ae_f3f7_4ea4_9e97_d8c762b48c31.slice - libcontainer container kubepods-besteffort-podb69cd1ae_f3f7_4ea4_9e97_d8c762b48c31.slice.
Jan 17 00:03:05.367865 systemd[1]: Created slice kubepods-besteffort-pod78c76606_d1d8_421b_ba4e_8cdbad81bc9c.slice - libcontainer container kubepods-besteffort-pod78c76606_d1d8_421b_ba4e_8cdbad81bc9c.slice.
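The Created slice entries above show kubelet's systemd cgroup driver allocating one transient slice per pod, named from the pod's QoS class plus its UID with dashes rewritten to underscores (systemd treats "-" in a slice name as a hierarchy separator). A small sketch of the mapping visible in these entries (a hypothetical helper, not kubelet's code):

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName reproduces the naming scheme visible in the journal:
// kubepods-<qos>-pod<uid with dashes escaped to underscores>.slice.
func podSliceName(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	// UID taken from the whisker-75b94bdbb6-5dd2v entries above.
	fmt.Println(podSliceName("besteffort", "78c76606-d1d8-421b-ba4e-8cdbad81bc9c"))
	// -> kubepods-besteffort-pod78c76606_d1d8_421b_ba4e_8cdbad81bc9c.slice
}
```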
Jan 17 00:03:05.386064 kubelet[3407]: I0117 00:03:05.386008 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6bng\" (UniqueName: \"kubernetes.io/projected/78c76606-d1d8-421b-ba4e-8cdbad81bc9c-kube-api-access-q6bng\") pod \"whisker-75b94bdbb6-5dd2v\" (UID: \"78c76606-d1d8-421b-ba4e-8cdbad81bc9c\") " pod="calico-system/whisker-75b94bdbb6-5dd2v"
Jan 17 00:03:05.386966 kubelet[3407]: I0117 00:03:05.386935 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bc218c2-324a-4549-a1e2-ab9fb6d1d96d-config-volume\") pod \"coredns-668d6bf9bc-982qf\" (UID: \"3bc218c2-324a-4549-a1e2-ab9fb6d1d96d\") " pod="kube-system/coredns-668d6bf9bc-982qf"
Jan 17 00:03:05.387613 kubelet[3407]: I0117 00:03:05.387578 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/98e123b4-3ef3-4dbb-b304-2875273a6844-calico-apiserver-certs\") pod \"calico-apiserver-76869b6969-4smvz\" (UID: \"98e123b4-3ef3-4dbb-b304-2875273a6844\") " pod="calico-apiserver/calico-apiserver-76869b6969-4smvz"
Jan 17 00:03:05.387816 kubelet[3407]: I0117 00:03:05.387790 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/2760346a-cdd2-4959-9cca-5bf87123f24a-goldmane-key-pair\") pod \"goldmane-666569f655-dfrr7\" (UID: \"2760346a-cdd2-4959-9cca-5bf87123f24a\") " pod="calico-system/goldmane-666569f655-dfrr7"
Jan 17 00:03:05.387962 kubelet[3407]: I0117 00:03:05.387935 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2760346a-cdd2-4959-9cca-5bf87123f24a-config\") pod \"goldmane-666569f655-dfrr7\" (UID: \"2760346a-cdd2-4959-9cca-5bf87123f24a\") " pod="calico-system/goldmane-666569f655-dfrr7"
Jan 17 00:03:05.388245 kubelet[3407]: I0117 00:03:05.388139 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/78c76606-d1d8-421b-ba4e-8cdbad81bc9c-whisker-backend-key-pair\") pod \"whisker-75b94bdbb6-5dd2v\" (UID: \"78c76606-d1d8-421b-ba4e-8cdbad81bc9c\") " pod="calico-system/whisker-75b94bdbb6-5dd2v"
Jan 17 00:03:05.388245 kubelet[3407]: I0117 00:03:05.388178 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2760346a-cdd2-4959-9cca-5bf87123f24a-goldmane-ca-bundle\") pod \"goldmane-666569f655-dfrr7\" (UID: \"2760346a-cdd2-4959-9cca-5bf87123f24a\") " pod="calico-system/goldmane-666569f655-dfrr7"
Jan 17 00:03:05.388550 kubelet[3407]: I0117 00:03:05.388488 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbc5q\" (UniqueName: \"kubernetes.io/projected/3bc218c2-324a-4549-a1e2-ab9fb6d1d96d-kube-api-access-tbc5q\") pod \"coredns-668d6bf9bc-982qf\" (UID: \"3bc218c2-324a-4549-a1e2-ab9fb6d1d96d\") " pod="kube-system/coredns-668d6bf9bc-982qf"
Jan 17 00:03:05.388730 kubelet[3407]: I0117 00:03:05.388690 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lx5xv\" (UniqueName: \"kubernetes.io/projected/98e123b4-3ef3-4dbb-b304-2875273a6844-kube-api-access-lx5xv\") pod \"calico-apiserver-76869b6969-4smvz\" (UID: \"98e123b4-3ef3-4dbb-b304-2875273a6844\") " pod="calico-apiserver/calico-apiserver-76869b6969-4smvz"
Jan 17 00:03:05.389066 kubelet[3407]: I0117 00:03:05.388905 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31-tigera-ca-bundle\") pod \"calico-kube-controllers-67f478bb65-pq6fw\" (UID: \"b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31\") " pod="calico-system/calico-kube-controllers-67f478bb65-pq6fw"
Jan 17 00:03:05.396353 kubelet[3407]: I0117 00:03:05.395912 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78c76606-d1d8-421b-ba4e-8cdbad81bc9c-whisker-ca-bundle\") pod \"whisker-75b94bdbb6-5dd2v\" (UID: \"78c76606-d1d8-421b-ba4e-8cdbad81bc9c\") " pod="calico-system/whisker-75b94bdbb6-5dd2v"
Jan 17 00:03:05.396353 kubelet[3407]: I0117 00:03:05.396215 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t94sc\" (UniqueName: \"kubernetes.io/projected/2760346a-cdd2-4959-9cca-5bf87123f24a-kube-api-access-t94sc\") pod \"goldmane-666569f655-dfrr7\" (UID: \"2760346a-cdd2-4959-9cca-5bf87123f24a\") " pod="calico-system/goldmane-666569f655-dfrr7"
Jan 17 00:03:05.397244 kubelet[3407]: I0117 00:03:05.396924 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmds6\" (UniqueName: \"kubernetes.io/projected/b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31-kube-api-access-mmds6\") pod \"calico-kube-controllers-67f478bb65-pq6fw\" (UID: \"b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31\") " pod="calico-system/calico-kube-controllers-67f478bb65-pq6fw"
Jan 17 00:03:05.500408 containerd[2019]: time="2026-01-17T00:03:05.499909993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p9b6s,Uid:100157b3-6e13-496d-9d2a-b11a40a79c18,Namespace:kube-system,Attempt:0,}"
Jan 17 00:03:05.531995 containerd[2019]: time="2026-01-17T00:03:05.531883826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76869b6969-b9cdk,Uid:cecd0bc0-a5de-49ac-853f-0e0f9c309bd4,Namespace:calico-apiserver,Attempt:0,}"
Jan 17 00:03:05.594896 containerd[2019]: time="2026-01-17T00:03:05.594830174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-982qf,Uid:3bc218c2-324a-4549-a1e2-ab9fb6d1d96d,Namespace:kube-system,Attempt:0,}"
Jan 17 00:03:05.617845 containerd[2019]: time="2026-01-17T00:03:05.616797974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76869b6969-4smvz,Uid:98e123b4-3ef3-4dbb-b304-2875273a6844,Namespace:calico-apiserver,Attempt:0,}"
Jan 17 00:03:05.631469 containerd[2019]: time="2026-01-17T00:03:05.631379558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dfrr7,Uid:2760346a-cdd2-4959-9cca-5bf87123f24a,Namespace:calico-system,Attempt:0,}"
Jan 17 00:03:05.664301 containerd[2019]: time="2026-01-17T00:03:05.663920546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67f478bb65-pq6fw,Uid:b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31,Namespace:calico-system,Attempt:0,}"
Jan 17 00:03:05.683827 containerd[2019]: time="2026-01-17T00:03:05.683779250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75b94bdbb6-5dd2v,Uid:78c76606-d1d8-421b-ba4e-8cdbad81bc9c,Namespace:calico-system,Attempt:0,}"
Jan 17 00:03:06.020597 containerd[2019]: time="2026-01-17T00:03:06.020296080Z" level=info msg="shim disconnected" id=59b64848f47ba9d8e610f947ddcb12f231157785b481424f4dc6e74f889831ad namespace=k8s.io
Jan 17 00:03:06.020597 containerd[2019]: time="2026-01-17T00:03:06.020370804Z" level=warning msg="cleaning up after shim disconnected" id=59b64848f47ba9d8e610f947ddcb12f231157785b481424f4dc6e74f889831ad namespace=k8s.io
Jan 17 00:03:06.020597 containerd[2019]: time="2026-01-17T00:03:06.020391348Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:03:06.528227 systemd[1]: Created slice kubepods-besteffort-pod0e8ea394_25e8_46d5_8e69_e40f87a471c2.slice - libcontainer container kubepods-besteffort-pod0e8ea394_25e8_46d5_8e69_e40f87a471c2.slice.
Jan 17 00:03:06.535559 containerd[2019]: time="2026-01-17T00:03:06.534888015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl689,Uid:0e8ea394-25e8-46d5-8e69-e40f87a471c2,Namespace:calico-system,Attempt:0,}"
Jan 17 00:03:06.548134 containerd[2019]: time="2026-01-17T00:03:06.548068947Z" level=error msg="Failed to destroy network for sandbox \"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.551990 containerd[2019]: time="2026-01-17T00:03:06.551560851Z" level=error msg="encountered an error cleaning up failed sandbox \"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.553478 containerd[2019]: time="2026-01-17T00:03:06.552404067Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76869b6969-b9cdk,Uid:cecd0bc0-a5de-49ac-853f-0e0f9c309bd4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.554772 kubelet[3407]: E0117 00:03:06.554702 3407 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.555719 kubelet[3407]: E0117 00:03:06.554813 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76869b6969-b9cdk"
Jan 17 00:03:06.555719 kubelet[3407]: E0117 00:03:06.554848 3407 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76869b6969-b9cdk"
Jan 17 00:03:06.556299 containerd[2019]: time="2026-01-17T00:03:06.556020963Z" level=error msg="Failed to destroy network for sandbox \"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.557025 containerd[2019]: time="2026-01-17T00:03:06.556757139Z" level=error msg="encountered an error cleaning up failed sandbox \"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.557025 containerd[2019]: time="2026-01-17T00:03:06.556842099Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-75b94bdbb6-5dd2v,Uid:78c76606-d1d8-421b-ba4e-8cdbad81bc9c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.559344 kubelet[3407]: E0117 00:03:06.558790 3407 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.559344 kubelet[3407]: E0117 00:03:06.559157 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76869b6969-b9cdk_calico-apiserver(cecd0bc0-a5de-49ac-853f-0e0f9c309bd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76869b6969-b9cdk_calico-apiserver(cecd0bc0-a5de-49ac-853f-0e0f9c309bd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76869b6969-b9cdk" podUID="cecd0bc0-a5de-49ac-853f-0e0f9c309bd4"
Jan 17 00:03:06.559623 kubelet[3407]: E0117 00:03:06.559445 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75b94bdbb6-5dd2v"
Jan 17 00:03:06.560028 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50-shm.mount: Deactivated successfully.
Jan 17 00:03:06.561831 kubelet[3407]: E0117 00:03:06.559483 3407 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-75b94bdbb6-5dd2v"
Jan 17 00:03:06.562439 kubelet[3407]: E0117 00:03:06.562070 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-75b94bdbb6-5dd2v_calico-system(78c76606-d1d8-421b-ba4e-8cdbad81bc9c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-75b94bdbb6-5dd2v_calico-system(78c76606-d1d8-421b-ba4e-8cdbad81bc9c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75b94bdbb6-5dd2v" podUID="78c76606-d1d8-421b-ba4e-8cdbad81bc9c"
Jan 17 00:03:06.567948 containerd[2019]: time="2026-01-17T00:03:06.567887607Z" level=error msg="Failed to destroy network for sandbox \"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.570307 containerd[2019]: time="2026-01-17T00:03:06.568828155Z" level=error msg="encountered an error cleaning up failed sandbox \"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.570940 containerd[2019]: time="2026-01-17T00:03:06.570524847Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67f478bb65-pq6fw,Uid:b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.575161 kubelet[3407]: E0117 00:03:06.575109 3407 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.575902 kubelet[3407]: E0117 00:03:06.575694 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67f478bb65-pq6fw"
Jan 17 00:03:06.575902 kubelet[3407]: E0117 00:03:06.575753 3407 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67f478bb65-pq6fw"
Jan 17 00:03:06.575902 kubelet[3407]: E0117 00:03:06.575828 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67f478bb65-pq6fw_calico-system(b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67f478bb65-pq6fw_calico-system(b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67f478bb65-pq6fw" podUID="b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31"
Jan 17 00:03:06.577689 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace-shm.mount: Deactivated successfully.
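Every sandbox failure in this stretch has the same root cause: the Calico CNI plugin stats /var/lib/calico/nodename, a file that calico-node writes only once it is up, and bails out with the "check that the calico/node container is running" hint while the file is absent. The pods keep failing sandbox creation until the calico-node-6frmc pod started earlier in the log finishes initializing. A minimal sketch of that readiness gate (illustrative only, not Calico's actual source):

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// nodenameFile is written by calico-node at startup; the CNI plugin reads it
// to learn which Calico node object this host corresponds to.
const nodenameFile = "/var/lib/calico/nodename"

// determineNodename mirrors the failure mode in the journal: a missing file
// becomes an actionable error rather than a bare stat failure.
func determineNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := determineNodename()
	if err != nil {
		fmt.Println(err) // what containerd relays as the sandbox setup error
		return
	}
	fmt.Println("calico nodename:", name)
}
```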
Jan 17 00:03:06.586159 containerd[2019]: time="2026-01-17T00:03:06.586085631Z" level=error msg="Failed to destroy network for sandbox \"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.587041 containerd[2019]: time="2026-01-17T00:03:06.586793307Z" level=error msg="encountered an error cleaning up failed sandbox \"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.587041 containerd[2019]: time="2026-01-17T00:03:06.586889055Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-982qf,Uid:3bc218c2-324a-4549-a1e2-ab9fb6d1d96d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.587939 kubelet[3407]: E0117 00:03:06.587696 3407 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.587939 kubelet[3407]: E0117 00:03:06.587774 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-982qf"
Jan 17 00:03:06.587939 kubelet[3407]: E0117 00:03:06.587812 3407 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-982qf"
Jan 17 00:03:06.588175 kubelet[3407]: E0117 00:03:06.587872 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-982qf_kube-system(3bc218c2-324a-4549-a1e2-ab9fb6d1d96d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-982qf_kube-system(3bc218c2-324a-4549-a1e2-ab9fb6d1d96d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-982qf" podUID="3bc218c2-324a-4549-a1e2-ab9fb6d1d96d"
Jan 17 00:03:06.604177 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902-shm.mount: Deactivated successfully.
Jan 17 00:03:06.619164 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22-shm.mount: Deactivated successfully.
Jan 17 00:03:06.628645 containerd[2019]: time="2026-01-17T00:03:06.628416639Z" level=error msg="Failed to destroy network for sandbox \"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.630768 containerd[2019]: time="2026-01-17T00:03:06.630406191Z" level=error msg="encountered an error cleaning up failed sandbox \"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.630768 containerd[2019]: time="2026-01-17T00:03:06.630511011Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p9b6s,Uid:100157b3-6e13-496d-9d2a-b11a40a79c18,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.631094 kubelet[3407]: E0117 00:03:06.630780 3407 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.631094 kubelet[3407]: E0117 00:03:06.630851 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p9b6s"
Jan 17 00:03:06.631094 kubelet[3407]: E0117 00:03:06.630892 3407 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p9b6s"
Jan 17 00:03:06.631419 kubelet[3407]: E0117 00:03:06.630967 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-p9b6s_kube-system(100157b3-6e13-496d-9d2a-b11a40a79c18)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-p9b6s_kube-system(100157b3-6e13-496d-9d2a-b11a40a79c18)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p9b6s" podUID="100157b3-6e13-496d-9d2a-b11a40a79c18"
Jan 17 00:03:06.651046 containerd[2019]: time="2026-01-17T00:03:06.650678775Z" level=error msg="Failed to destroy network for sandbox \"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.654410 containerd[2019]: time="2026-01-17T00:03:06.654329811Z" level=error msg="encountered an error cleaning up failed sandbox \"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.654570 containerd[2019]: time="2026-01-17T00:03:06.654455187Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dfrr7,Uid:2760346a-cdd2-4959-9cca-5bf87123f24a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.655500 kubelet[3407]: E0117 00:03:06.654806 3407 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.655500 kubelet[3407]: E0117 00:03:06.654879 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dfrr7"
Jan 17 00:03:06.655500 kubelet[3407]: E0117 00:03:06.654917 3407 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-dfrr7"
Jan 17 00:03:06.656446 kubelet[3407]: E0117 00:03:06.654987 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-dfrr7_calico-system(2760346a-cdd2-4959-9cca-5bf87123f24a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-dfrr7_calico-system(2760346a-cdd2-4959-9cca-5bf87123f24a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-dfrr7" podUID="2760346a-cdd2-4959-9cca-5bf87123f24a"
Jan 17 00:03:06.666184 containerd[2019]: time="2026-01-17T00:03:06.666100383Z" level=error msg="Failed to destroy network for sandbox \"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.666754 containerd[2019]: time="2026-01-17T00:03:06.666704043Z" level=error msg="encountered an error cleaning up failed sandbox \"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.666904 containerd[2019]: time="2026-01-17T00:03:06.666787383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76869b6969-4smvz,Uid:98e123b4-3ef3-4dbb-b304-2875273a6844,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.667131 kubelet[3407]: E0117 00:03:06.667072 3407 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:03:06.667252 kubelet[3407]: E0117 00:03:06.667155 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76869b6969-4smvz"
Jan 17 00:03:06.667252 kubelet[3407]: E0117 00:03:06.667191 3407 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76869b6969-4smvz"
Jan 17 00:03:06.667384 kubelet[3407]: E0117 00:03:06.667281 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to
\"CreatePodSandbox\" for \"calico-apiserver-76869b6969-4smvz_calico-apiserver(98e123b4-3ef3-4dbb-b304-2875273a6844)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76869b6969-4smvz_calico-apiserver(98e123b4-3ef3-4dbb-b304-2875273a6844)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76869b6969-4smvz" podUID="98e123b4-3ef3-4dbb-b304-2875273a6844" Jan 17 00:03:06.740275 containerd[2019]: time="2026-01-17T00:03:06.740172760Z" level=error msg="Failed to destroy network for sandbox \"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:06.740910 containerd[2019]: time="2026-01-17T00:03:06.740842756Z" level=error msg="encountered an error cleaning up failed sandbox \"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:06.741017 containerd[2019]: time="2026-01-17T00:03:06.740933992Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl689,Uid:0e8ea394-25e8-46d5-8e69-e40f87a471c2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:06.741474 kubelet[3407]: E0117 00:03:06.741420 3407 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:06.741560 kubelet[3407]: E0117 00:03:06.741519 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl689" Jan 17 00:03:06.741752 kubelet[3407]: E0117 00:03:06.741578 3407 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jl689" Jan 17 00:03:06.741752 kubelet[3407]: E0117 
00:03:06.741698 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jl689_calico-system(0e8ea394-25e8-46d5-8e69-e40f87a471c2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jl689_calico-system(0e8ea394-25e8-46d5-8e69-e40f87a471c2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl689" podUID="0e8ea394-25e8-46d5-8e69-e40f87a471c2" Jan 17 00:03:06.799451 kubelet[3407]: I0117 00:03:06.799312 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" Jan 17 00:03:06.802963 containerd[2019]: time="2026-01-17T00:03:06.802277656Z" level=info msg="StopPodSandbox for \"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\"" Jan 17 00:03:06.802963 containerd[2019]: time="2026-01-17T00:03:06.802586776Z" level=info msg="Ensure that sandbox a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e in task-service has been cleanup successfully" Jan 17 00:03:06.809593 kubelet[3407]: I0117 00:03:06.809535 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" Jan 17 00:03:06.812115 containerd[2019]: time="2026-01-17T00:03:06.810849028Z" level=info msg="StopPodSandbox for \"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\"" Jan 17 00:03:06.812115 containerd[2019]: time="2026-01-17T00:03:06.811344748Z" level=info msg="Ensure that sandbox cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902 in task-service has been cleanup successfully" Jan 17 00:03:06.816252 kubelet[3407]: I0117 00:03:06.816124 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" Jan 17 00:03:06.818561 containerd[2019]: time="2026-01-17T00:03:06.818479216Z" level=info msg="StopPodSandbox for \"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\"" Jan 17 00:03:06.820900 containerd[2019]: time="2026-01-17T00:03:06.820726720Z" level=info msg="Ensure that sandbox 6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22 in task-service has been cleanup successfully" Jan 17 00:03:06.830221 kubelet[3407]: I0117 00:03:06.830146 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" Jan 17 00:03:06.832806 containerd[2019]: time="2026-01-17T00:03:06.832143100Z" level=info msg="StopPodSandbox for \"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\"" Jan 17 00:03:06.836470 containerd[2019]: time="2026-01-17T00:03:06.836415076Z" level=info msg="Ensure that sandbox eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace in task-service has been cleanup successfully" Jan 17 00:03:06.841240 kubelet[3407]: I0117 00:03:06.841051 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" Jan 17 00:03:06.844600 containerd[2019]: time="2026-01-17T00:03:06.844543684Z" 
level=info msg="StopPodSandbox for \"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\"" Jan 17 00:03:06.847590 containerd[2019]: time="2026-01-17T00:03:06.847523668Z" level=info msg="Ensure that sandbox b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd in task-service has been cleanup successfully" Jan 17 00:03:06.852825 kubelet[3407]: I0117 00:03:06.852006 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" Jan 17 00:03:06.855170 containerd[2019]: time="2026-01-17T00:03:06.853407940Z" level=info msg="StopPodSandbox for \"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\"" Jan 17 00:03:06.857542 containerd[2019]: time="2026-01-17T00:03:06.857261176Z" level=info msg="Ensure that sandbox 68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543 in task-service has been cleanup successfully" Jan 17 00:03:06.865506 kubelet[3407]: I0117 00:03:06.865324 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" Jan 17 00:03:06.873928 containerd[2019]: time="2026-01-17T00:03:06.873471196Z" level=info msg="StopPodSandbox for \"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\"" Jan 17 00:03:06.879083 containerd[2019]: time="2026-01-17T00:03:06.878390860Z" level=info msg="Ensure that sandbox 3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50 in task-service has been cleanup successfully" Jan 17 00:03:06.899409 containerd[2019]: time="2026-01-17T00:03:06.899061736Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 17 00:03:06.909740 kubelet[3407]: I0117 00:03:06.909466 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" Jan 17 00:03:06.918271 containerd[2019]: time="2026-01-17T00:03:06.917694664Z" level=info msg="StopPodSandbox for \"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\"" Jan 17 00:03:06.920044 containerd[2019]: time="2026-01-17T00:03:06.919993996Z" level=info msg="Ensure that sandbox feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b in task-service has been cleanup successfully" Jan 17 00:03:07.069041 containerd[2019]: time="2026-01-17T00:03:07.068946481Z" level=error msg="StopPodSandbox for \"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\" failed" error="failed to destroy network for sandbox \"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:07.070305 kubelet[3407]: E0117 00:03:07.070191 3407 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" Jan 17 00:03:07.070495 kubelet[3407]: E0117 00:03:07.070315 3407 kuberuntime_manager.go:1546] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22"} Jan 17 00:03:07.070495 kubelet[3407]: E0117 00:03:07.070400 3407 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3bc218c2-324a-4549-a1e2-ab9fb6d1d96d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:03:07.070495 kubelet[3407]: E0117 00:03:07.070449 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3bc218c2-324a-4549-a1e2-ab9fb6d1d96d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-982qf" podUID="3bc218c2-324a-4549-a1e2-ab9fb6d1d96d" Jan 17 00:03:07.072664 containerd[2019]: time="2026-01-17T00:03:07.072583585Z" level=error msg="StopPodSandbox for \"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\" failed" error="failed to destroy network for sandbox \"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:07.073475 kubelet[3407]: E0117 00:03:07.072908 3407 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" Jan 17 00:03:07.073475 kubelet[3407]: E0117 00:03:07.072981 3407 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e"} Jan 17 00:03:07.073475 kubelet[3407]: E0117 00:03:07.073034 3407 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0e8ea394-25e8-46d5-8e69-e40f87a471c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:03:07.073475 kubelet[3407]: E0117 00:03:07.073072 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0e8ea394-25e8-46d5-8e69-e40f87a471c2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jl689" podUID="0e8ea394-25e8-46d5-8e69-e40f87a471c2" Jan 17 00:03:07.096471 containerd[2019]: time="2026-01-17T00:03:07.096393121Z" level=error msg="StopPodSandbox for \"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\" failed" error="failed to destroy network for sandbox \"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:07.097401 containerd[2019]: time="2026-01-17T00:03:07.096867529Z" level=error msg="StopPodSandbox for \"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\" failed" error="failed to destroy network for sandbox \"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:07.097493 kubelet[3407]: E0117 00:03:07.096726 3407 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" Jan 17 00:03:07.097493 kubelet[3407]: E0117 00:03:07.096791 3407 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902"} Jan 17 00:03:07.097493 kubelet[3407]: E0117 00:03:07.096856 3407 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:03:07.097493 kubelet[3407]: E0117 00:03:07.096895 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67f478bb65-pq6fw" podUID="b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31" Jan 17 00:03:07.097820 kubelet[3407]: E0117 00:03:07.097274 3407 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" Jan 17 00:03:07.097820 kubelet[3407]: E0117 00:03:07.097323 3407 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543"} Jan 17 00:03:07.097820 kubelet[3407]: E0117 00:03:07.097370 3407 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"98e123b4-3ef3-4dbb-b304-2875273a6844\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:03:07.097820 kubelet[3407]: E0117 00:03:07.097410 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"98e123b4-3ef3-4dbb-b304-2875273a6844\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76869b6969-4smvz" podUID="98e123b4-3ef3-4dbb-b304-2875273a6844" Jan 17 00:03:07.107843 containerd[2019]: time="2026-01-17T00:03:07.105680125Z" level=error msg="StopPodSandbox for \"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\" failed" error="failed to destroy network for sandbox \"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:07.108039 kubelet[3407]: E0117 00:03:07.106078 3407 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" Jan 17 00:03:07.108039 kubelet[3407]: E0117 00:03:07.106746 3407 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace"} Jan 17 00:03:07.108039 kubelet[3407]: E0117 00:03:07.106843 3407 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"78c76606-d1d8-421b-ba4e-8cdbad81bc9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:03:07.108039 kubelet[3407]: E0117 00:03:07.107157 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"78c76606-d1d8-421b-ba4e-8cdbad81bc9c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-75b94bdbb6-5dd2v" podUID="78c76606-d1d8-421b-ba4e-8cdbad81bc9c" Jan 17 00:03:07.114894 containerd[2019]: time="2026-01-17T00:03:07.114791149Z" level=error msg="StopPodSandbox for \"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\" failed" error="failed to destroy network for sandbox \"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:07.115587 kubelet[3407]: E0117 00:03:07.115328 3407 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" Jan 17 00:03:07.115587 kubelet[3407]: E0117 00:03:07.115403 3407 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd"} Jan 17 00:03:07.115587 kubelet[3407]: E0117 00:03:07.115459 3407 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2760346a-cdd2-4959-9cca-5bf87123f24a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:03:07.115587 kubelet[3407]: E0117 00:03:07.115516 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2760346a-cdd2-4959-9cca-5bf87123f24a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-dfrr7" podUID="2760346a-cdd2-4959-9cca-5bf87123f24a" Jan 17 00:03:07.126937 containerd[2019]: time="2026-01-17T00:03:07.126863186Z" level=error msg="StopPodSandbox for \"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\" failed" error="failed to destroy network for sandbox \"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:07.127492 kubelet[3407]: E0117 00:03:07.127185 3407 log.go:32] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" Jan 17 00:03:07.127492 kubelet[3407]: E0117 00:03:07.127305 3407 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b"} Jan 17 00:03:07.127492 kubelet[3407]: E0117 00:03:07.127359 3407 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"100157b3-6e13-496d-9d2a-b11a40a79c18\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:03:07.127492 kubelet[3407]: E0117 00:03:07.127399 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"100157b3-6e13-496d-9d2a-b11a40a79c18\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p9b6s" podUID="100157b3-6e13-496d-9d2a-b11a40a79c18" Jan 17 00:03:07.137416 containerd[2019]: time="2026-01-17T00:03:07.137248298Z" level=error msg="StopPodSandbox for \"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\" failed" error="failed to destroy network for sandbox \"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:03:07.137638 kubelet[3407]: E0117 00:03:07.137584 3407 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" Jan 17 00:03:07.137741 kubelet[3407]: E0117 00:03:07.137649 3407 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50"} Jan 17 00:03:07.137741 kubelet[3407]: E0117 00:03:07.137710 3407 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cecd0bc0-a5de-49ac-853f-0e0f9c309bd4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:03:07.137909 kubelet[3407]: E0117 00:03:07.137749 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cecd0bc0-a5de-49ac-853f-0e0f9c309bd4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76869b6969-b9cdk" podUID="cecd0bc0-a5de-49ac-853f-0e0f9c309bd4" Jan 17 00:03:07.143008 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e-shm.mount: Deactivated successfully. Jan 17 00:03:07.143235 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd-shm.mount: Deactivated successfully. Jan 17 00:03:07.143380 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543-shm.mount: Deactivated successfully. Jan 17 00:03:07.143531 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b-shm.mount: Deactivated successfully. Jan 17 00:03:13.140888 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3049775095.mount: Deactivated successfully. Jan 17 00:03:13.199352 containerd[2019]: time="2026-01-17T00:03:13.199159280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:13.203681 containerd[2019]: time="2026-01-17T00:03:13.200654384Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 17 00:03:13.203681 containerd[2019]: time="2026-01-17T00:03:13.202035128Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:13.207010 containerd[2019]: time="2026-01-17T00:03:13.206931392Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:03:13.209120 containerd[2019]: time="2026-01-17T00:03:13.209043548Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.309910076s" Jan 17 00:03:13.209349 containerd[2019]: time="2026-01-17T00:03:13.209317424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 17 00:03:13.257730 containerd[2019]: time="2026-01-17T00:03:13.257336072Z" level=info msg="CreateContainer within sandbox \"282c8d6458150c9d14c568c0f4143da3919400712ca7b863d5a9b42b0a1cd1c5\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:03:13.312792 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2751428654.mount: Deactivated successfully. Jan 17 00:03:13.352868 containerd[2019]: time="2026-01-17T00:03:13.352783376Z" level=info msg="CreateContainer within sandbox \"282c8d6458150c9d14c568c0f4143da3919400712ca7b863d5a9b42b0a1cd1c5\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f8857140e800261847d90ea01ff74d2535903fca8c3e7e3119737befcd5ac59d\"" Jan 17 00:03:13.356952 containerd[2019]: time="2026-01-17T00:03:13.354859364Z" level=info msg="StartContainer for \"f8857140e800261847d90ea01ff74d2535903fca8c3e7e3119737befcd5ac59d\"" Jan 17 00:03:13.444640 systemd[1]: Started cri-containerd-f8857140e800261847d90ea01ff74d2535903fca8c3e7e3119737befcd5ac59d.scope - libcontainer container f8857140e800261847d90ea01ff74d2535903fca8c3e7e3119737befcd5ac59d. Jan 17 00:03:13.542316 containerd[2019]: time="2026-01-17T00:03:13.542114709Z" level=info msg="StartContainer for \"f8857140e800261847d90ea01ff74d2535903fca8c3e7e3119737befcd5ac59d\" returns successfully" Jan 17 00:03:13.834905 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:03:13.835098 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 17 00:03:14.152294 kubelet[3407]: I0117 00:03:14.152077 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6frmc" podStartSLOduration=2.370358789 podStartE2EDuration="18.152047328s" podCreationTimestamp="2026-01-17 00:02:56 +0000 UTC" firstStartedPulling="2026-01-17 00:02:57.430109081 +0000 UTC m=+34.153019954" lastFinishedPulling="2026-01-17 00:03:13.211797644 +0000 UTC m=+49.934708493" observedRunningTime="2026-01-17 00:03:14.030324644 +0000 UTC m=+50.753235505" watchObservedRunningTime="2026-01-17 00:03:14.152047328 +0000 UTC m=+50.874958177" Jan 17 00:03:14.159373 containerd[2019]: time="2026-01-17T00:03:14.158744612Z" level=info msg="StopPodSandbox for \"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\"" Jan 17 00:03:14.532533 containerd[2019]: 2026-01-17 00:03:14.367 [INFO][4595] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" Jan 17 00:03:14.532533 containerd[2019]: 2026-01-17 00:03:14.369 [INFO][4595] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" iface="eth0" netns="/var/run/netns/cni-bca47d5b-f689-6a00-6484-d86b859622db" Jan 17 00:03:14.532533 containerd[2019]: 2026-01-17 00:03:14.369 [INFO][4595] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" iface="eth0" netns="/var/run/netns/cni-bca47d5b-f689-6a00-6484-d86b859622db" Jan 17 00:03:14.532533 containerd[2019]: 2026-01-17 00:03:14.371 [INFO][4595] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" iface="eth0" netns="/var/run/netns/cni-bca47d5b-f689-6a00-6484-d86b859622db" Jan 17 00:03:14.532533 containerd[2019]: 2026-01-17 00:03:14.371 [INFO][4595] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" Jan 17 00:03:14.532533 containerd[2019]: 2026-01-17 00:03:14.371 [INFO][4595] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" Jan 17 00:03:14.532533 containerd[2019]: 2026-01-17 00:03:14.485 [INFO][4609] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" HandleID="k8s-pod-network.eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" Workload="ip--172--31--30--130-k8s-whisker--75b94bdbb6--5dd2v-eth0" Jan 17 00:03:14.532533 containerd[2019]: 2026-01-17 00:03:14.485 [INFO][4609] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:14.532533 containerd[2019]: 2026-01-17 00:03:14.485 [INFO][4609] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:14.532533 containerd[2019]: 2026-01-17 00:03:14.510 [WARNING][4609] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" HandleID="k8s-pod-network.eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" Workload="ip--172--31--30--130-k8s-whisker--75b94bdbb6--5dd2v-eth0" Jan 17 00:03:14.532533 containerd[2019]: 2026-01-17 00:03:14.510 [INFO][4609] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" HandleID="k8s-pod-network.eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" Workload="ip--172--31--30--130-k8s-whisker--75b94bdbb6--5dd2v-eth0" Jan 17 00:03:14.532533 containerd[2019]: 2026-01-17 00:03:14.517 [INFO][4609] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:14.532533 containerd[2019]: 2026-01-17 00:03:14.527 [INFO][4595] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" Jan 17 00:03:14.538661 containerd[2019]: time="2026-01-17T00:03:14.532653862Z" level=info msg="TearDown network for sandbox \"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\" successfully" Jan 17 00:03:14.538661 containerd[2019]: time="2026-01-17T00:03:14.532720114Z" level=info msg="StopPodSandbox for \"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\" returns successfully" Jan 17 00:03:14.543575 systemd[1]: run-netns-cni\x2dbca47d5b\x2df689\x2d6a00\x2d6484\x2dd86b859622db.mount: Deactivated successfully. 
Jan 17 00:03:14.599885 kubelet[3407]: I0117 00:03:14.599075 3407 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/78c76606-d1d8-421b-ba4e-8cdbad81bc9c-whisker-backend-key-pair\") pod \"78c76606-d1d8-421b-ba4e-8cdbad81bc9c\" (UID: \"78c76606-d1d8-421b-ba4e-8cdbad81bc9c\") " Jan 17 00:03:14.599885 kubelet[3407]: I0117 00:03:14.599166 3407 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78c76606-d1d8-421b-ba4e-8cdbad81bc9c-whisker-ca-bundle\") pod \"78c76606-d1d8-421b-ba4e-8cdbad81bc9c\" (UID: \"78c76606-d1d8-421b-ba4e-8cdbad81bc9c\") " Jan 17 00:03:14.599885 kubelet[3407]: I0117 00:03:14.599232 3407 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6bng\" (UniqueName: \"kubernetes.io/projected/78c76606-d1d8-421b-ba4e-8cdbad81bc9c-kube-api-access-q6bng\") pod \"78c76606-d1d8-421b-ba4e-8cdbad81bc9c\" (UID: \"78c76606-d1d8-421b-ba4e-8cdbad81bc9c\") " Jan 17 00:03:14.606030 kubelet[3407]: I0117 00:03:14.605658 3407 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78c76606-d1d8-421b-ba4e-8cdbad81bc9c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "78c76606-d1d8-421b-ba4e-8cdbad81bc9c" (UID: "78c76606-d1d8-421b-ba4e-8cdbad81bc9c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 17 00:03:14.617530 kubelet[3407]: I0117 00:03:14.615451 3407 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c76606-d1d8-421b-ba4e-8cdbad81bc9c-kube-api-access-q6bng" (OuterVolumeSpecName: "kube-api-access-q6bng") pod "78c76606-d1d8-421b-ba4e-8cdbad81bc9c" (UID: "78c76606-d1d8-421b-ba4e-8cdbad81bc9c"). InnerVolumeSpecName "kube-api-access-q6bng". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 17 00:03:14.616024 systemd[1]: var-lib-kubelet-pods-78c76606\x2dd1d8\x2d421b\x2dba4e\x2d8cdbad81bc9c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq6bng.mount: Deactivated successfully. Jan 17 00:03:14.618830 kubelet[3407]: I0117 00:03:14.618700 3407 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78c76606-d1d8-421b-ba4e-8cdbad81bc9c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "78c76606-d1d8-421b-ba4e-8cdbad81bc9c" (UID: "78c76606-d1d8-421b-ba4e-8cdbad81bc9c"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 17 00:03:14.625728 systemd[1]: var-lib-kubelet-pods-78c76606\x2dd1d8\x2d421b\x2dba4e\x2d8cdbad81bc9c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 17 00:03:14.699958 kubelet[3407]: I0117 00:03:14.699846 3407 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/78c76606-d1d8-421b-ba4e-8cdbad81bc9c-whisker-backend-key-pair\") on node \"ip-172-31-30-130\" DevicePath \"\"" Jan 17 00:03:14.699958 kubelet[3407]: I0117 00:03:14.699895 3407 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78c76606-d1d8-421b-ba4e-8cdbad81bc9c-whisker-ca-bundle\") on node \"ip-172-31-30-130\" DevicePath \"\"" Jan 17 00:03:14.699958 kubelet[3407]: I0117 00:03:14.699918 3407 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q6bng\" (UniqueName: \"kubernetes.io/projected/78c76606-d1d8-421b-ba4e-8cdbad81bc9c-kube-api-access-q6bng\") on node \"ip-172-31-30-130\" DevicePath \"\"" Jan 17 00:03:14.990088 systemd[1]: Removed slice kubepods-besteffort-pod78c76606_d1d8_421b_ba4e_8cdbad81bc9c.slice - libcontainer container kubepods-besteffort-pod78c76606_d1d8_421b_ba4e_8cdbad81bc9c.slice. Jan 17 00:03:15.146500 systemd[1]: run-containerd-runc-k8s.io-f8857140e800261847d90ea01ff74d2535903fca8c3e7e3119737befcd5ac59d-runc.PFUWcQ.mount: Deactivated successfully. Jan 17 00:03:15.171434 systemd[1]: Created slice kubepods-besteffort-pod82ea16ac_8d68_4d2e_9ce1_f2b920201dc6.slice - libcontainer container kubepods-besteffort-pod82ea16ac_8d68_4d2e_9ce1_f2b920201dc6.slice. Jan 17 00:03:15.214593 kubelet[3407]: I0117 00:03:15.214521 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/82ea16ac-8d68-4d2e-9ce1-f2b920201dc6-whisker-backend-key-pair\") pod \"whisker-6886fb9d84-zxzfv\" (UID: \"82ea16ac-8d68-4d2e-9ce1-f2b920201dc6\") " pod="calico-system/whisker-6886fb9d84-zxzfv" Jan 17 00:03:15.215173 kubelet[3407]: I0117 00:03:15.214611 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwj5s\" (UniqueName: \"kubernetes.io/projected/82ea16ac-8d68-4d2e-9ce1-f2b920201dc6-kube-api-access-kwj5s\") pod \"whisker-6886fb9d84-zxzfv\" (UID: \"82ea16ac-8d68-4d2e-9ce1-f2b920201dc6\") " pod="calico-system/whisker-6886fb9d84-zxzfv" Jan 17 00:03:15.215173 kubelet[3407]: I0117 00:03:15.214667 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82ea16ac-8d68-4d2e-9ce1-f2b920201dc6-whisker-ca-bundle\") pod \"whisker-6886fb9d84-zxzfv\" (UID: \"82ea16ac-8d68-4d2e-9ce1-f2b920201dc6\") " pod="calico-system/whisker-6886fb9d84-zxzfv" Jan 17 00:03:15.482888 containerd[2019]: time="2026-01-17T00:03:15.482782583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6886fb9d84-zxzfv,Uid:82ea16ac-8d68-4d2e-9ce1-f2b920201dc6,Namespace:calico-system,Attempt:0,}" Jan 17 00:03:15.525254 kubelet[3407]: I0117 00:03:15.523619 3407 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78c76606-d1d8-421b-ba4e-8cdbad81bc9c" path="/var/lib/kubelet/pods/78c76606-d1d8-421b-ba4e-8cdbad81bc9c/volumes" Jan 17 00:03:15.713460 (udev-worker)[4561]: Network interface NamePolicy= disabled on kernel command line. 
Jan 17 00:03:15.715890 systemd-networkd[1941]: cali87648f25ba6: Link UP Jan 17 00:03:15.718095 systemd-networkd[1941]: cali87648f25ba6: Gained carrier Jan 17 00:03:15.755837 containerd[2019]: 2026-01-17 00:03:15.569 [INFO][4654] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 00:03:15.755837 containerd[2019]: 2026-01-17 00:03:15.591 [INFO][4654] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--130-k8s-whisker--6886fb9d84--zxzfv-eth0 whisker-6886fb9d84- calico-system 82ea16ac-8d68-4d2e-9ce1-f2b920201dc6 940 0 2026-01-17 00:03:15 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6886fb9d84 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-30-130 whisker-6886fb9d84-zxzfv eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali87648f25ba6 [] [] }} ContainerID="cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a" Namespace="calico-system" Pod="whisker-6886fb9d84-zxzfv" WorkloadEndpoint="ip--172--31--30--130-k8s-whisker--6886fb9d84--zxzfv-" Jan 17 00:03:15.755837 containerd[2019]: 2026-01-17 00:03:15.591 [INFO][4654] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a" Namespace="calico-system" Pod="whisker-6886fb9d84-zxzfv" WorkloadEndpoint="ip--172--31--30--130-k8s-whisker--6886fb9d84--zxzfv-eth0" Jan 17 00:03:15.755837 containerd[2019]: 2026-01-17 00:03:15.636 [INFO][4666] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a" HandleID="k8s-pod-network.cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a" Workload="ip--172--31--30--130-k8s-whisker--6886fb9d84--zxzfv-eth0" Jan 17 00:03:15.755837 containerd[2019]: 2026-01-17 00:03:15.636 [INFO][4666] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a" HandleID="k8s-pod-network.cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a" Workload="ip--172--31--30--130-k8s-whisker--6886fb9d84--zxzfv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afa0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-130", "pod":"whisker-6886fb9d84-zxzfv", "timestamp":"2026-01-17 00:03:15.636132552 +0000 UTC"}, Hostname:"ip-172-31-30-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:03:15.755837 containerd[2019]: 2026-01-17 00:03:15.636 [INFO][4666] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:15.755837 containerd[2019]: 2026-01-17 00:03:15.636 [INFO][4666] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:03:15.755837 containerd[2019]: 2026-01-17 00:03:15.636 [INFO][4666] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-130' Jan 17 00:03:15.755837 containerd[2019]: 2026-01-17 00:03:15.650 [INFO][4666] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a" host="ip-172-31-30-130" Jan 17 00:03:15.755837 containerd[2019]: 2026-01-17 00:03:15.661 [INFO][4666] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-130" Jan 17 00:03:15.755837 containerd[2019]: 2026-01-17 00:03:15.671 [INFO][4666] ipam/ipam.go 511: Trying affinity for 192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:15.755837 containerd[2019]: 2026-01-17 00:03:15.674 [INFO][4666] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:15.755837 containerd[2019]: 2026-01-17 00:03:15.678 [INFO][4666] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:15.755837 containerd[2019]: 2026-01-17 00:03:15.678 [INFO][4666] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a" host="ip-172-31-30-130" Jan 17 00:03:15.755837 containerd[2019]: 2026-01-17 00:03:15.680 [INFO][4666] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a Jan 17 00:03:15.755837 containerd[2019]: 2026-01-17 00:03:15.688 [INFO][4666] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a" host="ip-172-31-30-130" Jan 17 00:03:15.755837 containerd[2019]: 2026-01-17 00:03:15.698 [INFO][4666] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.65/26] block=192.168.121.64/26 handle="k8s-pod-network.cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a" host="ip-172-31-30-130" Jan 17 00:03:15.755837 containerd[2019]: 2026-01-17 00:03:15.698 [INFO][4666] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.65/26] handle="k8s-pod-network.cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a" host="ip-172-31-30-130" Jan 17 00:03:15.755837 containerd[2019]: 2026-01-17 00:03:15.698 [INFO][4666] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:03:15.755837 containerd[2019]: 2026-01-17 00:03:15.698 [INFO][4666] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.65/26] IPv6=[] ContainerID="cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a" HandleID="k8s-pod-network.cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a" Workload="ip--172--31--30--130-k8s-whisker--6886fb9d84--zxzfv-eth0" Jan 17 00:03:15.758328 containerd[2019]: 2026-01-17 00:03:15.702 [INFO][4654] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a" Namespace="calico-system" Pod="whisker-6886fb9d84-zxzfv" WorkloadEndpoint="ip--172--31--30--130-k8s-whisker--6886fb9d84--zxzfv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-whisker--6886fb9d84--zxzfv-eth0", GenerateName:"whisker-6886fb9d84-", Namespace:"calico-system", SelfLink:"", UID:"82ea16ac-8d68-4d2e-9ce1-f2b920201dc6", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 3, 15, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6886fb9d84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"", Pod:"whisker-6886fb9d84-zxzfv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.121.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali87648f25ba6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:15.758328 containerd[2019]: 2026-01-17 00:03:15.702 [INFO][4654] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.65/32] ContainerID="cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a" Namespace="calico-system" Pod="whisker-6886fb9d84-zxzfv" WorkloadEndpoint="ip--172--31--30--130-k8s-whisker--6886fb9d84--zxzfv-eth0" Jan 17 00:03:15.758328 containerd[2019]: 2026-01-17 00:03:15.702 [INFO][4654] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87648f25ba6 ContainerID="cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a" Namespace="calico-system" Pod="whisker-6886fb9d84-zxzfv" WorkloadEndpoint="ip--172--31--30--130-k8s-whisker--6886fb9d84--zxzfv-eth0" Jan 17 00:03:15.758328 containerd[2019]: 2026-01-17 00:03:15.719 [INFO][4654] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a" Namespace="calico-system" Pod="whisker-6886fb9d84-zxzfv" WorkloadEndpoint="ip--172--31--30--130-k8s-whisker--6886fb9d84--zxzfv-eth0" Jan 17 00:03:15.758328 containerd[2019]: 2026-01-17 00:03:15.720 [INFO][4654] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a" Namespace="calico-system" Pod="whisker-6886fb9d84-zxzfv"
WorkloadEndpoint="ip--172--31--30--130-k8s-whisker--6886fb9d84--zxzfv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-whisker--6886fb9d84--zxzfv-eth0", GenerateName:"whisker-6886fb9d84-", Namespace:"calico-system", SelfLink:"", UID:"82ea16ac-8d68-4d2e-9ce1-f2b920201dc6", ResourceVersion:"940", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 3, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6886fb9d84", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a", Pod:"whisker-6886fb9d84-zxzfv", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.121.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali87648f25ba6", MAC:"a6:aa:d4:37:e8:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:15.758328 containerd[2019]: 2026-01-17 00:03:15.749 [INFO][4654] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a" Namespace="calico-system" Pod="whisker-6886fb9d84-zxzfv" WorkloadEndpoint="ip--172--31--30--130-k8s-whisker--6886fb9d84--zxzfv-eth0" Jan 17 00:03:15.788852 containerd[2019]: time="2026-01-17T00:03:15.788329105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:03:15.788852 containerd[2019]: time="2026-01-17T00:03:15.788435425Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:03:15.788852 containerd[2019]: time="2026-01-17T00:03:15.788460985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:15.788852 containerd[2019]: time="2026-01-17T00:03:15.788609005Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:15.824498 systemd[1]: Started cri-containerd-cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a.scope - libcontainer container cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a. 
Jan 17 00:03:15.963557 containerd[2019]: time="2026-01-17T00:03:15.963331189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6886fb9d84-zxzfv,Uid:82ea16ac-8d68-4d2e-9ce1-f2b920201dc6,Namespace:calico-system,Attempt:0,} returns sandbox id \"cc0c37f5a761d22ea2e9f80b0793505188c029065dd7c721c33e136493c9dd9a\"" Jan 17 00:03:15.969179 containerd[2019]: time="2026-01-17T00:03:15.968566645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:03:16.286303 containerd[2019]: time="2026-01-17T00:03:16.286193699Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:16.288680 containerd[2019]: time="2026-01-17T00:03:16.288535151Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:03:16.288680 containerd[2019]: time="2026-01-17T00:03:16.288631883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:03:16.288924 kubelet[3407]: E0117 00:03:16.288852 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:03:16.289510 kubelet[3407]: E0117 00:03:16.288922 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:03:16.293714 kubelet[3407]: E0117 00:03:16.293591 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6a0029da1bf5401b94096282928a075e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kwj5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6886fb9d84-zxzfv_calico-system(82ea16ac-8d68-4d2e-9ce1-f2b920201dc6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:16.298212 containerd[2019]: time="2026-01-17T00:03:16.298150895Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:03:16.598613 containerd[2019]: time="2026-01-17T00:03:16.598541641Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:16.600763 containerd[2019]: time="2026-01-17T00:03:16.600665893Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:03:16.600926 containerd[2019]: time="2026-01-17T00:03:16.600861025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:03:16.601121 kubelet[3407]: E0117 00:03:16.601059 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:03:16.601242 kubelet[3407]: E0117 00:03:16.601133 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:03:16.601439 kubelet[3407]: E0117 00:03:16.601324 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kwj5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6886fb9d84-zxzfv_calico-system(82ea16ac-8d68-4d2e-9ce1-f2b920201dc6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:16.603039 kubelet[3407]: E0117 00:03:16.602945 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6886fb9d84-zxzfv" podUID="82ea16ac-8d68-4d2e-9ce1-f2b920201dc6" Jan 17 00:03:16.977609 kubelet[3407]: E0117 00:03:16.977432 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6886fb9d84-zxzfv" podUID="82ea16ac-8d68-4d2e-9ce1-f2b920201dc6" Jan 17 00:03:17.044095 systemd-networkd[1941]: cali87648f25ba6: Gained IPv6LL Jan 17 00:03:17.120240 kernel: bpftool[4842]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 00:03:17.490731 (udev-worker)[4562]: Network interface NamePolicy= disabled on kernel command line. Jan 17 00:03:17.513673 systemd-networkd[1941]: vxlan.calico: Link UP Jan 17 00:03:17.513687 systemd-networkd[1941]: vxlan.calico: Gained carrier Jan 17 00:03:17.515959 containerd[2019]: time="2026-01-17T00:03:17.515900665Z" level=info msg="StopPodSandbox for \"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\"" Jan 17 00:03:17.773624 containerd[2019]: 2026-01-17 00:03:17.696 [INFO][4875] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" Jan 17 00:03:17.773624 containerd[2019]: 2026-01-17 00:03:17.696 [INFO][4875] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" iface="eth0" netns="/var/run/netns/cni-13fe5fd1-e845-d1d4-12e8-da1079be387c" Jan 17 00:03:17.773624 containerd[2019]: 2026-01-17 00:03:17.697 [INFO][4875] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" iface="eth0" netns="/var/run/netns/cni-13fe5fd1-e845-d1d4-12e8-da1079be387c" Jan 17 00:03:17.773624 containerd[2019]: 2026-01-17 00:03:17.697 [INFO][4875] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" iface="eth0" netns="/var/run/netns/cni-13fe5fd1-e845-d1d4-12e8-da1079be387c" Jan 17 00:03:17.773624 containerd[2019]: 2026-01-17 00:03:17.697 [INFO][4875] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" Jan 17 00:03:17.773624 containerd[2019]: 2026-01-17 00:03:17.697 [INFO][4875] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" Jan 17 00:03:17.773624 containerd[2019]: 2026-01-17 00:03:17.746 [INFO][4895] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" HandleID="k8s-pod-network.3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0" Jan 17 00:03:17.773624 containerd[2019]: 2026-01-17 00:03:17.746 [INFO][4895] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 17 00:03:17.773624 containerd[2019]: 2026-01-17 00:03:17.746 [INFO][4895] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:17.773624 containerd[2019]: 2026-01-17 00:03:17.763 [WARNING][4895] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" HandleID="k8s-pod-network.3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0" Jan 17 00:03:17.773624 containerd[2019]: 2026-01-17 00:03:17.763 [INFO][4895] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" HandleID="k8s-pod-network.3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0" Jan 17 00:03:17.773624 containerd[2019]: 2026-01-17 00:03:17.766 [INFO][4895] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:17.773624 containerd[2019]: 2026-01-17 00:03:17.770 [INFO][4875] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" Jan 17 00:03:17.776619 containerd[2019]: time="2026-01-17T00:03:17.776389178Z" level=info msg="TearDown network for sandbox \"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\" successfully" Jan 17 00:03:17.776619 containerd[2019]: time="2026-01-17T00:03:17.776458994Z" level=info msg="StopPodSandbox for \"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\" returns successfully" Jan 17 00:03:17.779074 containerd[2019]: time="2026-01-17T00:03:17.779007146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76869b6969-b9cdk,Uid:cecd0bc0-a5de-49ac-853f-0e0f9c309bd4,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:03:17.782249 systemd[1]: run-netns-cni\x2d13fe5fd1\x2de845\x2dd1d4\x2d12e8\x2dda1079be387c.mount: Deactivated successfully. 
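
The teardown above ends with a WARNING that is actually the happy path: "Asked to release address but it doesn't exist. Ignoring", followed by a second release attempt keyed by workload ID. CNI DEL must be idempotent, since the runtime may repeat it for sandboxes whose addresses were already reclaimed. A toy sketch of that release-twice-and-ignore-missing pattern; the store type and both lookup tables are illustrative:

package main

import "fmt"

// store is an illustrative allocation table keyed two ways, standing in
// for the handleID and workloadID lookups in ipam_plugin.go above.
type store struct {
	byHandle   map[string]string // handle ID -> IP
	byWorkload map[string]string // workload ID -> IP
}

// release mirrors the teardown above: try the handle first, fall back
// to the workload ID, and treat "not found" as already released so that
// repeated or late CNI DEL calls stay idempotent.
func (s *store) release(handle, workload string) {
	if ip, ok := s.byHandle[handle]; ok {
		delete(s.byHandle, handle)
		fmt.Println("released", ip, "via handle")
		return
	}
	if ip, ok := s.byWorkload[workload]; ok {
		delete(s.byWorkload, workload)
		fmt.Println("released", ip, "via workload ID")
		return
	}
	fmt.Println("asked to release address but it doesn't exist; ignoring")
}

func main() {
	s := &store{byHandle: map[string]string{}, byWorkload: map[string]string{}}
	s.release(
		"k8s-pod-network.3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50",
		"ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0",
	)
}
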
Jan 17 00:03:17.988005 kubelet[3407]: E0117 00:03:17.987627 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6886fb9d84-zxzfv" podUID="82ea16ac-8d68-4d2e-9ce1-f2b920201dc6" Jan 17 00:03:18.128827 systemd-networkd[1941]: cali3075b529f58: Link UP Jan 17 00:03:18.133494 systemd-networkd[1941]: cali3075b529f58: Gained carrier Jan 17 00:03:18.171434 containerd[2019]: 2026-01-17 00:03:17.926 [INFO][4902] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0 calico-apiserver-76869b6969- calico-apiserver cecd0bc0-a5de-49ac-853f-0e0f9c309bd4 966 0 2026-01-17 00:02:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76869b6969 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-130 calico-apiserver-76869b6969-b9cdk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3075b529f58 [] [] }} ContainerID="9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0" Namespace="calico-apiserver" Pod="calico-apiserver-76869b6969-b9cdk" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-" Jan 17 00:03:18.171434 containerd[2019]: 2026-01-17 00:03:17.927 [INFO][4902] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0" Namespace="calico-apiserver" Pod="calico-apiserver-76869b6969-b9cdk" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0" Jan 17 00:03:18.171434 containerd[2019]: 2026-01-17 00:03:18.009 [INFO][4915] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0" HandleID="k8s-pod-network.9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0" Jan 17 00:03:18.171434 containerd[2019]: 2026-01-17 00:03:18.009 [INFO][4915] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0" HandleID="k8s-pod-network.9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032ac60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-130", 
"pod":"calico-apiserver-76869b6969-b9cdk", "timestamp":"2026-01-17 00:03:18.009393684 +0000 UTC"}, Hostname:"ip-172-31-30-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:03:18.171434 containerd[2019]: 2026-01-17 00:03:18.010 [INFO][4915] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:18.171434 containerd[2019]: 2026-01-17 00:03:18.010 [INFO][4915] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:18.171434 containerd[2019]: 2026-01-17 00:03:18.010 [INFO][4915] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-130' Jan 17 00:03:18.171434 containerd[2019]: 2026-01-17 00:03:18.038 [INFO][4915] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0" host="ip-172-31-30-130" Jan 17 00:03:18.171434 containerd[2019]: 2026-01-17 00:03:18.060 [INFO][4915] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-130" Jan 17 00:03:18.171434 containerd[2019]: 2026-01-17 00:03:18.075 [INFO][4915] ipam/ipam.go 511: Trying affinity for 192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:18.171434 containerd[2019]: 2026-01-17 00:03:18.079 [INFO][4915] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:18.171434 containerd[2019]: 2026-01-17 00:03:18.083 [INFO][4915] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:18.171434 containerd[2019]: 2026-01-17 00:03:18.083 [INFO][4915] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0" host="ip-172-31-30-130" Jan 17 00:03:18.171434 containerd[2019]: 2026-01-17 00:03:18.087 [INFO][4915] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0 Jan 17 00:03:18.171434 containerd[2019]: 2026-01-17 00:03:18.102 [INFO][4915] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0" host="ip-172-31-30-130" Jan 17 00:03:18.171434 containerd[2019]: 2026-01-17 00:03:18.111 [INFO][4915] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.66/26] block=192.168.121.64/26 handle="k8s-pod-network.9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0" host="ip-172-31-30-130" Jan 17 00:03:18.171434 containerd[2019]: 2026-01-17 00:03:18.112 [INFO][4915] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.66/26] handle="k8s-pod-network.9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0" host="ip-172-31-30-130" Jan 17 00:03:18.171434 containerd[2019]: 2026-01-17 00:03:18.112 [INFO][4915] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:03:18.171434 containerd[2019]: 2026-01-17 00:03:18.112 [INFO][4915] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.66/26] IPv6=[] ContainerID="9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0" HandleID="k8s-pod-network.9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0" Jan 17 00:03:18.177588 containerd[2019]: 2026-01-17 00:03:18.118 [INFO][4902] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0" Namespace="calico-apiserver" Pod="calico-apiserver-76869b6969-b9cdk" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0", GenerateName:"calico-apiserver-76869b6969-", Namespace:"calico-apiserver", SelfLink:"", UID:"cecd0bc0-a5de-49ac-853f-0e0f9c309bd4", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76869b6969", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"", Pod:"calico-apiserver-76869b6969-b9cdk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3075b529f58", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:18.177588 containerd[2019]: 2026-01-17 00:03:18.119 [INFO][4902] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.66/32] ContainerID="9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0" Namespace="calico-apiserver" Pod="calico-apiserver-76869b6969-b9cdk" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0" Jan 17 00:03:18.177588 containerd[2019]: 2026-01-17 00:03:18.119 [INFO][4902] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3075b529f58 ContainerID="9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0" Namespace="calico-apiserver" Pod="calico-apiserver-76869b6969-b9cdk" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0" Jan 17 00:03:18.177588 containerd[2019]: 2026-01-17 00:03:18.132 [INFO][4902] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0" Namespace="calico-apiserver" Pod="calico-apiserver-76869b6969-b9cdk" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0" Jan 17 00:03:18.177588 containerd[2019]: 2026-01-17 00:03:18.138 [INFO][4902] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0" Namespace="calico-apiserver" Pod="calico-apiserver-76869b6969-b9cdk" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0", GenerateName:"calico-apiserver-76869b6969-", Namespace:"calico-apiserver", SelfLink:"", UID:"cecd0bc0-a5de-49ac-853f-0e0f9c309bd4", ResourceVersion:"966", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76869b6969", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0", Pod:"calico-apiserver-76869b6969-b9cdk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3075b529f58", MAC:"5e:2e:23:71:64:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:18.177588 containerd[2019]: 2026-01-17 00:03:18.161 [INFO][4902] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0" Namespace="calico-apiserver" Pod="calico-apiserver-76869b6969-b9cdk" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0" Jan 17 00:03:18.237114 containerd[2019]: time="2026-01-17T00:03:18.235010653Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:03:18.237114 containerd[2019]: time="2026-01-17T00:03:18.235139017Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:03:18.237114 containerd[2019]: time="2026-01-17T00:03:18.235176925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:18.237114 containerd[2019]: time="2026-01-17T00:03:18.236961373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:18.310585 systemd[1]: Started cri-containerd-9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0.scope - libcontainer container 9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0. 
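
The MACs stamped onto the two endpoints above, a6:aa:d4:37:e8:51 and 5e:2e:23:71:64:a6, both have the locally-administered bit set and the multicast bit clear in the first octet, the standard recipe for randomly generated veth addresses that cannot collide with vendor-assigned ones. A sketch of generating such a MAC (the function name is illustrative):

package main

import (
	"crypto/rand"
	"fmt"
)

// randomLocalMAC returns a random locally-administered unicast MAC: in
// the first octet, set bit 1 (locally administered) and clear bit 0
// (multicast), the same bit pattern visible in a6:... and 5e:... above.
func randomLocalMAC() (string, error) {
	var b [6]byte
	if _, err := rand.Read(b[:]); err != nil {
		return "", err
	}
	b[0] = (b[0] | 0x02) &^ 0x01
	return fmt.Sprintf("%02x:%02x:%02x:%02x:%02x:%02x",
		b[0], b[1], b[2], b[3], b[4], b[5]), nil
}

func main() {
	mac, err := randomLocalMAC()
	if err != nil {
		panic(err)
	}
	fmt.Println(mac)
}
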
Jan 17 00:03:18.440401 containerd[2019]: time="2026-01-17T00:03:18.440086142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76869b6969-b9cdk,Uid:cecd0bc0-a5de-49ac-853f-0e0f9c309bd4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0\"" Jan 17 00:03:18.464240 containerd[2019]: time="2026-01-17T00:03:18.463459718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:03:18.516662 containerd[2019]: time="2026-01-17T00:03:18.516452474Z" level=info msg="StopPodSandbox for \"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\"" Jan 17 00:03:18.677658 containerd[2019]: 2026-01-17 00:03:18.606 [INFO][5016] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" Jan 17 00:03:18.677658 containerd[2019]: 2026-01-17 00:03:18.607 [INFO][5016] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" iface="eth0" netns="/var/run/netns/cni-c8119344-3f97-50df-e02f-f73c3f9a5c71" Jan 17 00:03:18.677658 containerd[2019]: 2026-01-17 00:03:18.607 [INFO][5016] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" iface="eth0" netns="/var/run/netns/cni-c8119344-3f97-50df-e02f-f73c3f9a5c71" Jan 17 00:03:18.677658 containerd[2019]: 2026-01-17 00:03:18.607 [INFO][5016] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" iface="eth0" netns="/var/run/netns/cni-c8119344-3f97-50df-e02f-f73c3f9a5c71" Jan 17 00:03:18.677658 containerd[2019]: 2026-01-17 00:03:18.608 [INFO][5016] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" Jan 17 00:03:18.677658 containerd[2019]: 2026-01-17 00:03:18.608 [INFO][5016] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" Jan 17 00:03:18.677658 containerd[2019]: 2026-01-17 00:03:18.649 [INFO][5024] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" HandleID="k8s-pod-network.cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" Workload="ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0" Jan 17 00:03:18.677658 containerd[2019]: 2026-01-17 00:03:18.650 [INFO][5024] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:18.677658 containerd[2019]: 2026-01-17 00:03:18.650 [INFO][5024] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:18.677658 containerd[2019]: 2026-01-17 00:03:18.668 [WARNING][5024] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" HandleID="k8s-pod-network.cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" Workload="ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0" Jan 17 00:03:18.677658 containerd[2019]: 2026-01-17 00:03:18.668 [INFO][5024] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" HandleID="k8s-pod-network.cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" Workload="ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0" Jan 17 00:03:18.677658 containerd[2019]: 2026-01-17 00:03:18.671 [INFO][5024] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:18.677658 containerd[2019]: 2026-01-17 00:03:18.673 [INFO][5016] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" Jan 17 00:03:18.678927 containerd[2019]: time="2026-01-17T00:03:18.677871075Z" level=info msg="TearDown network for sandbox \"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\" successfully" Jan 17 00:03:18.678927 containerd[2019]: time="2026-01-17T00:03:18.677910207Z" level=info msg="StopPodSandbox for \"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\" returns successfully" Jan 17 00:03:18.679550 containerd[2019]: time="2026-01-17T00:03:18.679486251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67f478bb65-pq6fw,Uid:b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31,Namespace:calico-system,Attempt:1,}" Jan 17 00:03:18.787377 systemd[1]: run-netns-cni\x2dc8119344\x2d3f97\x2d50df\x2de02f\x2df73c3f9a5c71.mount: Deactivated successfully. 
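
The unit name in that "Deactivated successfully" line is systemd's path escaping at work: /run/netns/cni-c8119344-… loses its leading slash, the remaining slashes become '-', and the literal hyphens in the netns name become \x2d so the original path stays recoverable from the unit name. A simplified Go version of the escaping that reproduces the unit name seen above:

package main

import (
	"fmt"
	"strings"
)

// escapePath is a simplified take on systemd's path escaping: strip the
// leading slash, turn remaining slashes into '-', and hex-escape
// anything outside [A-Za-z0-9:_.]. Real systemd has extra rules (e.g.
// for leading dots) that this sketch omits.
func escapePath(p string) string {
	p = strings.TrimPrefix(p, "/")
	var out strings.Builder
	for _, c := range []byte(p) {
		switch {
		case c == '/':
			out.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			out.WriteByte(c)
		default:
			fmt.Fprintf(&out, `\x%02x`, c)
		}
	}
	return out.String()
}

func main() {
	// Prints run-netns-cni\x2dc8119344\x2d3f97\x2d50df\x2de02f\x2df73c3f9a5c71.mount,
	// the unit systemd just deactivated above.
	fmt.Println(escapePath("/run/netns/cni-c8119344-3f97-50df-e02f-f73c3f9a5c71") + ".mount")
}
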
Jan 17 00:03:18.789494 containerd[2019]: time="2026-01-17T00:03:18.789058647Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:18.798778 containerd[2019]: time="2026-01-17T00:03:18.798679311Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:03:18.799674 containerd[2019]: time="2026-01-17T00:03:18.798840651Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:03:18.801400 kubelet[3407]: E0117 00:03:18.799089 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:03:18.801400 kubelet[3407]: E0117 00:03:18.799148 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:03:18.801400 kubelet[3407]: E0117 00:03:18.799512 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xfn9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76869b6969-b9cdk_calico-apiserver(cecd0bc0-a5de-49ac-853f-0e0f9c309bd4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:18.801400 kubelet[3407]: E0117 00:03:18.800882 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76869b6969-b9cdk" podUID="cecd0bc0-a5de-49ac-853f-0e0f9c309bd4" Jan 17 00:03:18.912603 systemd-networkd[1941]: calid24b52934d7: Link UP Jan 17 00:03:18.915171 systemd-networkd[1941]: calid24b52934d7: Gained carrier Jan 17 00:03:18.949342 containerd[2019]: 2026-01-17 00:03:18.770 [INFO][5031] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0 calico-kube-controllers-67f478bb65- calico-system b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31 979 0 2026-01-17 00:02:57 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67f478bb65 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-30-130 calico-kube-controllers-67f478bb65-pq6fw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid24b52934d7 [] [] }} ContainerID="4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46" Namespace="calico-system" Pod="calico-kube-controllers-67f478bb65-pq6fw" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-" Jan 17 00:03:18.949342 containerd[2019]: 2026-01-17 00:03:18.771 [INFO][5031] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46" Namespace="calico-system" Pod="calico-kube-controllers-67f478bb65-pq6fw" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0" Jan 17 00:03:18.949342 containerd[2019]: 2026-01-17 00:03:18.839 [INFO][5043] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46" HandleID="k8s-pod-network.4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46" Workload="ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0" Jan 17 00:03:18.949342 containerd[2019]: 2026-01-17 00:03:18.839 [INFO][5043] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46" HandleID="k8s-pod-network.4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46" Workload="ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400043a570), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-130", "pod":"calico-kube-controllers-67f478bb65-pq6fw", "timestamp":"2026-01-17 00:03:18.839562316 +0000 UTC"}, Hostname:"ip-172-31-30-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:03:18.949342 containerd[2019]: 2026-01-17 00:03:18.840 [INFO][5043] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:18.949342 containerd[2019]: 2026-01-17 00:03:18.840 [INFO][5043] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:18.949342 containerd[2019]: 2026-01-17 00:03:18.840 [INFO][5043] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-130' Jan 17 00:03:18.949342 containerd[2019]: 2026-01-17 00:03:18.855 [INFO][5043] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46" host="ip-172-31-30-130" Jan 17 00:03:18.949342 containerd[2019]: 2026-01-17 00:03:18.862 [INFO][5043] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-130" Jan 17 00:03:18.949342 containerd[2019]: 2026-01-17 00:03:18.869 [INFO][5043] ipam/ipam.go 511: Trying affinity for 192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:18.949342 containerd[2019]: 2026-01-17 00:03:18.873 [INFO][5043] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:18.949342 containerd[2019]: 2026-01-17 00:03:18.877 [INFO][5043] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:18.949342 containerd[2019]: 2026-01-17 00:03:18.877 [INFO][5043] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46" host="ip-172-31-30-130" Jan 17 00:03:18.949342 containerd[2019]: 2026-01-17 00:03:18.879 [INFO][5043] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46 Jan 17 00:03:18.949342 containerd[2019]: 2026-01-17 00:03:18.886 [INFO][5043] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46" host="ip-172-31-30-130" Jan 17 00:03:18.949342 containerd[2019]: 2026-01-17 00:03:18.902 [INFO][5043] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.67/26] block=192.168.121.64/26 handle="k8s-pod-network.4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46" host="ip-172-31-30-130" Jan 17 00:03:18.949342 
containerd[2019]: 2026-01-17 00:03:18.902 [INFO][5043] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.67/26] handle="k8s-pod-network.4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46" host="ip-172-31-30-130" Jan 17 00:03:18.949342 containerd[2019]: 2026-01-17 00:03:18.902 [INFO][5043] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:18.949342 containerd[2019]: 2026-01-17 00:03:18.902 [INFO][5043] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.67/26] IPv6=[] ContainerID="4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46" HandleID="k8s-pod-network.4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46" Workload="ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0" Jan 17 00:03:18.951462 containerd[2019]: 2026-01-17 00:03:18.906 [INFO][5031] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46" Namespace="calico-system" Pod="calico-kube-controllers-67f478bb65-pq6fw" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0", GenerateName:"calico-kube-controllers-67f478bb65-", Namespace:"calico-system", SelfLink:"", UID:"b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67f478bb65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"", Pod:"calico-kube-controllers-67f478bb65-pq6fw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid24b52934d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:18.951462 containerd[2019]: 2026-01-17 00:03:18.906 [INFO][5031] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.67/32] ContainerID="4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46" Namespace="calico-system" Pod="calico-kube-controllers-67f478bb65-pq6fw" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0" Jan 17 00:03:18.951462 containerd[2019]: 2026-01-17 00:03:18.906 [INFO][5031] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid24b52934d7 ContainerID="4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46" Namespace="calico-system" Pod="calico-kube-controllers-67f478bb65-pq6fw" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0" Jan 17 00:03:18.951462 containerd[2019]: 2026-01-17 
00:03:18.917 [INFO][5031] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46" Namespace="calico-system" Pod="calico-kube-controllers-67f478bb65-pq6fw" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0" Jan 17 00:03:18.951462 containerd[2019]: 2026-01-17 00:03:18.919 [INFO][5031] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46" Namespace="calico-system" Pod="calico-kube-controllers-67f478bb65-pq6fw" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0", GenerateName:"calico-kube-controllers-67f478bb65-", Namespace:"calico-system", SelfLink:"", UID:"b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67f478bb65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46", Pod:"calico-kube-controllers-67f478bb65-pq6fw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid24b52934d7", MAC:"f2:be:62:11:83:42", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:18.951462 containerd[2019]: 2026-01-17 00:03:18.940 [INFO][5031] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46" Namespace="calico-system" Pod="calico-kube-controllers-67f478bb65-pq6fw" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0" Jan 17 00:03:18.992496 kubelet[3407]: E0117 00:03:18.992427 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76869b6969-b9cdk" podUID="cecd0bc0-a5de-49ac-853f-0e0f9c309bd4" Jan 17 00:03:19.016357 containerd[2019]: time="2026-01-17T00:03:19.010443673Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:03:19.016357 containerd[2019]: time="2026-01-17T00:03:19.010545637Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:03:19.016357 containerd[2019]: time="2026-01-17T00:03:19.010573369Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:19.016357 containerd[2019]: time="2026-01-17T00:03:19.010734397Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:19.082808 systemd[1]: Started cri-containerd-4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46.scope - libcontainer container 4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46. Jan 17 00:03:19.168082 containerd[2019]: time="2026-01-17T00:03:19.168025297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67f478bb65-pq6fw,Uid:b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31,Namespace:calico-system,Attempt:1,} returns sandbox id \"4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46\"" Jan 17 00:03:19.170998 containerd[2019]: time="2026-01-17T00:03:19.170686561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:03:19.346637 systemd-networkd[1941]: vxlan.calico: Gained IPv6LL Jan 17 00:03:19.445653 containerd[2019]: time="2026-01-17T00:03:19.445436895Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:19.447815 containerd[2019]: time="2026-01-17T00:03:19.447615807Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:03:19.447815 containerd[2019]: time="2026-01-17T00:03:19.447761367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:03:19.448147 kubelet[3407]: E0117 00:03:19.448067 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:03:19.448276 kubelet[3407]: E0117 00:03:19.448161 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:03:19.449733 kubelet[3407]: E0117 00:03:19.448917 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mmds6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-67f478bb65-pq6fw_calico-system(b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:19.450328 kubelet[3407]: E0117 00:03:19.450257 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67f478bb65-pq6fw" podUID="b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31" Jan 17 00:03:19.511635 containerd[2019]: time="2026-01-17T00:03:19.511563507Z" level=info msg="StopPodSandbox 
for \"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\"" Jan 17 00:03:19.524749 containerd[2019]: time="2026-01-17T00:03:19.524311215Z" level=info msg="StopPodSandbox for \"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\"" Jan 17 00:03:19.733188 systemd-networkd[1941]: cali3075b529f58: Gained IPv6LL Jan 17 00:03:19.747967 containerd[2019]: 2026-01-17 00:03:19.639 [INFO][5116] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" Jan 17 00:03:19.747967 containerd[2019]: 2026-01-17 00:03:19.641 [INFO][5116] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" iface="eth0" netns="/var/run/netns/cni-c0214e18-7e1b-eefd-0b0c-1ba8fe253fb6" Jan 17 00:03:19.747967 containerd[2019]: 2026-01-17 00:03:19.641 [INFO][5116] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" iface="eth0" netns="/var/run/netns/cni-c0214e18-7e1b-eefd-0b0c-1ba8fe253fb6" Jan 17 00:03:19.747967 containerd[2019]: 2026-01-17 00:03:19.642 [INFO][5116] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" iface="eth0" netns="/var/run/netns/cni-c0214e18-7e1b-eefd-0b0c-1ba8fe253fb6" Jan 17 00:03:19.747967 containerd[2019]: 2026-01-17 00:03:19.642 [INFO][5116] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" Jan 17 00:03:19.747967 containerd[2019]: 2026-01-17 00:03:19.642 [INFO][5116] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" Jan 17 00:03:19.747967 containerd[2019]: 2026-01-17 00:03:19.707 [INFO][5132] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" HandleID="k8s-pod-network.6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0" Jan 17 00:03:19.747967 containerd[2019]: 2026-01-17 00:03:19.707 [INFO][5132] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:19.747967 containerd[2019]: 2026-01-17 00:03:19.707 [INFO][5132] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:19.747967 containerd[2019]: 2026-01-17 00:03:19.728 [WARNING][5132] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" HandleID="k8s-pod-network.6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0" Jan 17 00:03:19.747967 containerd[2019]: 2026-01-17 00:03:19.728 [INFO][5132] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" HandleID="k8s-pod-network.6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0" Jan 17 00:03:19.747967 containerd[2019]: 2026-01-17 00:03:19.735 [INFO][5132] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:03:19.747967 containerd[2019]: 2026-01-17 00:03:19.742 [INFO][5116] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" Jan 17 00:03:19.749885 containerd[2019]: time="2026-01-17T00:03:19.748306780Z" level=info msg="TearDown network for sandbox \"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\" successfully" Jan 17 00:03:19.749885 containerd[2019]: time="2026-01-17T00:03:19.748349752Z" level=info msg="StopPodSandbox for \"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\" returns successfully" Jan 17 00:03:19.754774 systemd[1]: run-netns-cni\x2dc0214e18\x2d7e1b\x2deefd\x2d0b0c\x2d1ba8fe253fb6.mount: Deactivated successfully. Jan 17 00:03:19.755978 containerd[2019]: time="2026-01-17T00:03:19.754829380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-982qf,Uid:3bc218c2-324a-4549-a1e2-ab9fb6d1d96d,Namespace:kube-system,Attempt:1,}" Jan 17 00:03:19.780595 containerd[2019]: 2026-01-17 00:03:19.662 [INFO][5120] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" Jan 17 00:03:19.780595 containerd[2019]: 2026-01-17 00:03:19.662 [INFO][5120] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" iface="eth0" netns="/var/run/netns/cni-08df5b7c-da5b-aa90-f1db-ac1a07e0b118" Jan 17 00:03:19.780595 containerd[2019]: 2026-01-17 00:03:19.664 [INFO][5120] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" iface="eth0" netns="/var/run/netns/cni-08df5b7c-da5b-aa90-f1db-ac1a07e0b118" Jan 17 00:03:19.780595 containerd[2019]: 2026-01-17 00:03:19.665 [INFO][5120] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" iface="eth0" netns="/var/run/netns/cni-08df5b7c-da5b-aa90-f1db-ac1a07e0b118" Jan 17 00:03:19.780595 containerd[2019]: 2026-01-17 00:03:19.666 [INFO][5120] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" Jan 17 00:03:19.780595 containerd[2019]: 2026-01-17 00:03:19.666 [INFO][5120] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" Jan 17 00:03:19.780595 containerd[2019]: 2026-01-17 00:03:19.736 [INFO][5137] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" HandleID="k8s-pod-network.b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" Workload="ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0" Jan 17 00:03:19.780595 containerd[2019]: 2026-01-17 00:03:19.736 [INFO][5137] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:19.780595 containerd[2019]: 2026-01-17 00:03:19.737 [INFO][5137] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:19.780595 containerd[2019]: 2026-01-17 00:03:19.764 [WARNING][5137] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" HandleID="k8s-pod-network.b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" Workload="ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0" Jan 17 00:03:19.780595 containerd[2019]: 2026-01-17 00:03:19.764 [INFO][5137] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" HandleID="k8s-pod-network.b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" Workload="ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0" Jan 17 00:03:19.780595 containerd[2019]: 2026-01-17 00:03:19.767 [INFO][5137] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:19.780595 containerd[2019]: 2026-01-17 00:03:19.771 [INFO][5120] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" Jan 17 00:03:19.780666 systemd[1]: run-netns-cni\x2d08df5b7c\x2dda5b\x2daa90\x2df1db\x2dac1a07e0b118.mount: Deactivated successfully. Jan 17 00:03:19.785266 containerd[2019]: time="2026-01-17T00:03:19.785158072Z" level=info msg="TearDown network for sandbox \"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\" successfully" Jan 17 00:03:19.785266 containerd[2019]: time="2026-01-17T00:03:19.785238148Z" level=info msg="StopPodSandbox for \"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\" returns successfully" Jan 17 00:03:19.786765 containerd[2019]: time="2026-01-17T00:03:19.786318892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dfrr7,Uid:2760346a-cdd2-4959-9cca-5bf87123f24a,Namespace:calico-system,Attempt:1,}" Jan 17 00:03:20.001231 kubelet[3407]: E0117 00:03:20.000731 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67f478bb65-pq6fw" podUID="b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31" Jan 17 00:03:20.003995 kubelet[3407]: E0117 00:03:20.003676 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76869b6969-b9cdk" podUID="cecd0bc0-a5de-49ac-853f-0e0f9c309bd4" Jan 17 00:03:20.117643 systemd-networkd[1941]: cali388d096fd98: Link UP Jan 17 00:03:20.121003 systemd-networkd[1941]: cali388d096fd98: Gained carrier Jan 17 00:03:20.158941 containerd[2019]: 2026-01-17 00:03:19.913 [INFO][5145] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0 coredns-668d6bf9bc- kube-system 3bc218c2-324a-4549-a1e2-ab9fb6d1d96d 996 0 2026-01-17 
00:02:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-130 coredns-668d6bf9bc-982qf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali388d096fd98 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316" Namespace="kube-system" Pod="coredns-668d6bf9bc-982qf" WorkloadEndpoint="ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-" Jan 17 00:03:20.158941 containerd[2019]: 2026-01-17 00:03:19.914 [INFO][5145] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316" Namespace="kube-system" Pod="coredns-668d6bf9bc-982qf" WorkloadEndpoint="ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0" Jan 17 00:03:20.158941 containerd[2019]: 2026-01-17 00:03:19.998 [INFO][5169] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316" HandleID="k8s-pod-network.d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0" Jan 17 00:03:20.158941 containerd[2019]: 2026-01-17 00:03:19.998 [INFO][5169] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316" HandleID="k8s-pod-network.d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002cb0d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-130", "pod":"coredns-668d6bf9bc-982qf", "timestamp":"2026-01-17 00:03:19.998011541 +0000 UTC"}, Hostname:"ip-172-31-30-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:03:20.158941 containerd[2019]: 2026-01-17 00:03:19.998 [INFO][5169] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:20.158941 containerd[2019]: 2026-01-17 00:03:19.998 [INFO][5169] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:03:20.158941 containerd[2019]: 2026-01-17 00:03:19.998 [INFO][5169] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-130' Jan 17 00:03:20.158941 containerd[2019]: 2026-01-17 00:03:20.047 [INFO][5169] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316" host="ip-172-31-30-130" Jan 17 00:03:20.158941 containerd[2019]: 2026-01-17 00:03:20.067 [INFO][5169] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-130" Jan 17 00:03:20.158941 containerd[2019]: 2026-01-17 00:03:20.074 [INFO][5169] ipam/ipam.go 511: Trying affinity for 192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:20.158941 containerd[2019]: 2026-01-17 00:03:20.077 [INFO][5169] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:20.158941 containerd[2019]: 2026-01-17 00:03:20.081 [INFO][5169] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:20.158941 containerd[2019]: 2026-01-17 00:03:20.081 [INFO][5169] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316" host="ip-172-31-30-130" Jan 17 00:03:20.158941 containerd[2019]: 2026-01-17 00:03:20.084 [INFO][5169] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316 Jan 17 00:03:20.158941 containerd[2019]: 2026-01-17 00:03:20.090 [INFO][5169] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316" host="ip-172-31-30-130" Jan 17 00:03:20.158941 containerd[2019]: 2026-01-17 00:03:20.104 [INFO][5169] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.68/26] block=192.168.121.64/26 handle="k8s-pod-network.d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316" host="ip-172-31-30-130" Jan 17 00:03:20.158941 containerd[2019]: 2026-01-17 00:03:20.104 [INFO][5169] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.68/26] handle="k8s-pod-network.d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316" host="ip-172-31-30-130" Jan 17 00:03:20.158941 containerd[2019]: 2026-01-17 00:03:20.104 [INFO][5169] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:03:20.158941 containerd[2019]: 2026-01-17 00:03:20.104 [INFO][5169] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.68/26] IPv6=[] ContainerID="d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316" HandleID="k8s-pod-network.d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0" Jan 17 00:03:20.161741 containerd[2019]: 2026-01-17 00:03:20.108 [INFO][5145] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316" Namespace="kube-system" Pod="coredns-668d6bf9bc-982qf" WorkloadEndpoint="ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3bc218c2-324a-4549-a1e2-ab9fb6d1d96d", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"", Pod:"coredns-668d6bf9bc-982qf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali388d096fd98", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:20.161741 containerd[2019]: 2026-01-17 00:03:20.108 [INFO][5145] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.68/32] ContainerID="d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316" Namespace="kube-system" Pod="coredns-668d6bf9bc-982qf" WorkloadEndpoint="ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0" Jan 17 00:03:20.161741 containerd[2019]: 2026-01-17 00:03:20.109 [INFO][5145] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali388d096fd98 ContainerID="d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316" Namespace="kube-system" Pod="coredns-668d6bf9bc-982qf" WorkloadEndpoint="ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0" Jan 17 00:03:20.161741 containerd[2019]: 2026-01-17 00:03:20.122 [INFO][5145] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316" Namespace="kube-system" Pod="coredns-668d6bf9bc-982qf" 
WorkloadEndpoint="ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0" Jan 17 00:03:20.161741 containerd[2019]: 2026-01-17 00:03:20.125 [INFO][5145] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316" Namespace="kube-system" Pod="coredns-668d6bf9bc-982qf" WorkloadEndpoint="ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3bc218c2-324a-4549-a1e2-ab9fb6d1d96d", ResourceVersion:"996", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316", Pod:"coredns-668d6bf9bc-982qf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali388d096fd98", MAC:"a6:9c:af:b2:5a:d0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:20.161741 containerd[2019]: 2026-01-17 00:03:20.152 [INFO][5145] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316" Namespace="kube-system" Pod="coredns-668d6bf9bc-982qf" WorkloadEndpoint="ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0" Jan 17 00:03:20.238651 containerd[2019]: time="2026-01-17T00:03:20.237318423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:03:20.238651 containerd[2019]: time="2026-01-17T00:03:20.237833859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:03:20.238651 containerd[2019]: time="2026-01-17T00:03:20.237889803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:20.244452 containerd[2019]: time="2026-01-17T00:03:20.243722247Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:20.248626 systemd-networkd[1941]: cali2325dc0e017: Link UP Jan 17 00:03:20.250792 systemd-networkd[1941]: cali2325dc0e017: Gained carrier Jan 17 00:03:20.307377 containerd[2019]: 2026-01-17 00:03:19.943 [INFO][5154] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0 goldmane-666569f655- calico-system 2760346a-cdd2-4959-9cca-5bf87123f24a 997 0 2026-01-17 00:02:49 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-30-130 goldmane-666569f655-dfrr7 eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali2325dc0e017 [] [] }} ContainerID="4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9" Namespace="calico-system" Pod="goldmane-666569f655-dfrr7" WorkloadEndpoint="ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-" Jan 17 00:03:20.307377 containerd[2019]: 2026-01-17 00:03:19.943 [INFO][5154] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9" Namespace="calico-system" Pod="goldmane-666569f655-dfrr7" WorkloadEndpoint="ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0" Jan 17 00:03:20.307377 containerd[2019]: 2026-01-17 00:03:20.037 [INFO][5174] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9" HandleID="k8s-pod-network.4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9" Workload="ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0" Jan 17 00:03:20.307377 containerd[2019]: 2026-01-17 00:03:20.037 [INFO][5174] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9" HandleID="k8s-pod-network.4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9" Workload="ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030c1f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-130", "pod":"goldmane-666569f655-dfrr7", "timestamp":"2026-01-17 00:03:20.037554914 +0000 UTC"}, Hostname:"ip-172-31-30-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:03:20.307377 containerd[2019]: 2026-01-17 00:03:20.037 [INFO][5174] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:20.307377 containerd[2019]: 2026-01-17 00:03:20.104 [INFO][5174] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:03:20.307377 containerd[2019]: 2026-01-17 00:03:20.104 [INFO][5174] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-130' Jan 17 00:03:20.307377 containerd[2019]: 2026-01-17 00:03:20.145 [INFO][5174] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9" host="ip-172-31-30-130" Jan 17 00:03:20.307377 containerd[2019]: 2026-01-17 00:03:20.169 [INFO][5174] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-130" Jan 17 00:03:20.307377 containerd[2019]: 2026-01-17 00:03:20.182 [INFO][5174] ipam/ipam.go 511: Trying affinity for 192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:20.307377 containerd[2019]: 2026-01-17 00:03:20.186 [INFO][5174] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:20.307377 containerd[2019]: 2026-01-17 00:03:20.192 [INFO][5174] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:20.307377 containerd[2019]: 2026-01-17 00:03:20.192 [INFO][5174] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9" host="ip-172-31-30-130" Jan 17 00:03:20.307377 containerd[2019]: 2026-01-17 00:03:20.197 [INFO][5174] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9 Jan 17 00:03:20.307377 containerd[2019]: 2026-01-17 00:03:20.204 [INFO][5174] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9" host="ip-172-31-30-130" Jan 17 00:03:20.307377 containerd[2019]: 2026-01-17 00:03:20.226 [INFO][5174] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.69/26] block=192.168.121.64/26 handle="k8s-pod-network.4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9" host="ip-172-31-30-130" Jan 17 00:03:20.307377 containerd[2019]: 2026-01-17 00:03:20.226 [INFO][5174] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.69/26] handle="k8s-pod-network.4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9" host="ip-172-31-30-130" Jan 17 00:03:20.307377 containerd[2019]: 2026-01-17 00:03:20.226 [INFO][5174] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:03:20.307377 containerd[2019]: 2026-01-17 00:03:20.229 [INFO][5174] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.69/26] IPv6=[] ContainerID="4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9" HandleID="k8s-pod-network.4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9" Workload="ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0" Jan 17 00:03:20.313011 containerd[2019]: 2026-01-17 00:03:20.237 [INFO][5154] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9" Namespace="calico-system" Pod="goldmane-666569f655-dfrr7" WorkloadEndpoint="ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2760346a-cdd2-4959-9cca-5bf87123f24a", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"", Pod:"goldmane-666569f655-dfrr7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.121.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2325dc0e017", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:20.313011 containerd[2019]: 2026-01-17 00:03:20.237 [INFO][5154] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.69/32] ContainerID="4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9" Namespace="calico-system" Pod="goldmane-666569f655-dfrr7" WorkloadEndpoint="ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0" Jan 17 00:03:20.313011 containerd[2019]: 2026-01-17 00:03:20.237 [INFO][5154] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2325dc0e017 ContainerID="4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9" Namespace="calico-system" Pod="goldmane-666569f655-dfrr7" WorkloadEndpoint="ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0" Jan 17 00:03:20.313011 containerd[2019]: 2026-01-17 00:03:20.253 [INFO][5154] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9" Namespace="calico-system" Pod="goldmane-666569f655-dfrr7" WorkloadEndpoint="ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0" Jan 17 00:03:20.313011 containerd[2019]: 2026-01-17 00:03:20.257 [INFO][5154] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9" Namespace="calico-system" Pod="goldmane-666569f655-dfrr7" 
WorkloadEndpoint="ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2760346a-cdd2-4959-9cca-5bf87123f24a", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9", Pod:"goldmane-666569f655-dfrr7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.121.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2325dc0e017", MAC:"6e:02:47:23:24:35", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:20.313011 containerd[2019]: 2026-01-17 00:03:20.292 [INFO][5154] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9" Namespace="calico-system" Pod="goldmane-666569f655-dfrr7" WorkloadEndpoint="ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0" Jan 17 00:03:20.316539 systemd[1]: Started cri-containerd-d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316.scope - libcontainer container d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316. Jan 17 00:03:20.395581 containerd[2019]: time="2026-01-17T00:03:20.395280291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:03:20.395853 containerd[2019]: time="2026-01-17T00:03:20.395540499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:03:20.396046 containerd[2019]: time="2026-01-17T00:03:20.395862411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:20.397291 containerd[2019]: time="2026-01-17T00:03:20.397138455Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:20.441537 systemd[1]: Started cri-containerd-4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9.scope - libcontainer container 4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9. 
Jan 17 00:03:20.453562 containerd[2019]: time="2026-01-17T00:03:20.453461464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-982qf,Uid:3bc218c2-324a-4549-a1e2-ab9fb6d1d96d,Namespace:kube-system,Attempt:1,} returns sandbox id \"d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316\"" Jan 17 00:03:20.464377 containerd[2019]: time="2026-01-17T00:03:20.464149156Z" level=info msg="CreateContainer within sandbox \"d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:03:20.495232 containerd[2019]: time="2026-01-17T00:03:20.495132316Z" level=info msg="CreateContainer within sandbox \"d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5983992056855893b46d4c49e71a8b5ded7f4bfca32daaa5b40c1d61e9b1977e\"" Jan 17 00:03:20.497705 containerd[2019]: time="2026-01-17T00:03:20.497077912Z" level=info msg="StartContainer for \"5983992056855893b46d4c49e71a8b5ded7f4bfca32daaa5b40c1d61e9b1977e\"" Jan 17 00:03:20.512928 containerd[2019]: time="2026-01-17T00:03:20.512753932Z" level=info msg="StopPodSandbox for \"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\"" Jan 17 00:03:20.560091 containerd[2019]: time="2026-01-17T00:03:20.560025520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-dfrr7,Uid:2760346a-cdd2-4959-9cca-5bf87123f24a,Namespace:calico-system,Attempt:1,} returns sandbox id \"4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9\"" Jan 17 00:03:20.567525 containerd[2019]: time="2026-01-17T00:03:20.566338048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:03:20.599084 systemd[1]: Started cri-containerd-5983992056855893b46d4c49e71a8b5ded7f4bfca32daaa5b40c1d61e9b1977e.scope - libcontainer container 5983992056855893b46d4c49e71a8b5ded7f4bfca32daaa5b40c1d61e9b1977e. Jan 17 00:03:20.677986 containerd[2019]: time="2026-01-17T00:03:20.677838425Z" level=info msg="StartContainer for \"5983992056855893b46d4c49e71a8b5ded7f4bfca32daaa5b40c1d61e9b1977e\" returns successfully" Jan 17 00:03:20.773439 containerd[2019]: 2026-01-17 00:03:20.686 [INFO][5297] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" Jan 17 00:03:20.773439 containerd[2019]: 2026-01-17 00:03:20.686 [INFO][5297] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" iface="eth0" netns="/var/run/netns/cni-9bef75a5-0e5e-94c2-491c-c56615cb77c9" Jan 17 00:03:20.773439 containerd[2019]: 2026-01-17 00:03:20.687 [INFO][5297] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" iface="eth0" netns="/var/run/netns/cni-9bef75a5-0e5e-94c2-491c-c56615cb77c9" Jan 17 00:03:20.773439 containerd[2019]: 2026-01-17 00:03:20.689 [INFO][5297] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" iface="eth0" netns="/var/run/netns/cni-9bef75a5-0e5e-94c2-491c-c56615cb77c9" Jan 17 00:03:20.773439 containerd[2019]: 2026-01-17 00:03:20.689 [INFO][5297] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" Jan 17 00:03:20.773439 containerd[2019]: 2026-01-17 00:03:20.689 [INFO][5297] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" Jan 17 00:03:20.773439 containerd[2019]: 2026-01-17 00:03:20.739 [INFO][5331] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" HandleID="k8s-pod-network.68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0" Jan 17 00:03:20.773439 containerd[2019]: 2026-01-17 00:03:20.739 [INFO][5331] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:20.773439 containerd[2019]: 2026-01-17 00:03:20.739 [INFO][5331] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:20.773439 containerd[2019]: 2026-01-17 00:03:20.760 [WARNING][5331] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" HandleID="k8s-pod-network.68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0" Jan 17 00:03:20.773439 containerd[2019]: 2026-01-17 00:03:20.760 [INFO][5331] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" HandleID="k8s-pod-network.68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0" Jan 17 00:03:20.773439 containerd[2019]: 2026-01-17 00:03:20.764 [INFO][5331] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:20.773439 containerd[2019]: 2026-01-17 00:03:20.768 [INFO][5297] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" Jan 17 00:03:20.774531 containerd[2019]: time="2026-01-17T00:03:20.774468485Z" level=info msg="TearDown network for sandbox \"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\" successfully" Jan 17 00:03:20.774598 containerd[2019]: time="2026-01-17T00:03:20.774526697Z" level=info msg="StopPodSandbox for \"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\" returns successfully" Jan 17 00:03:20.780773 containerd[2019]: time="2026-01-17T00:03:20.780700541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76869b6969-4smvz,Uid:98e123b4-3ef3-4dbb-b304-2875273a6844,Namespace:calico-apiserver,Attempt:1,}" Jan 17 00:03:20.798724 systemd[1]: run-netns-cni\x2d9bef75a5\x2d0e5e\x2d94c2\x2d491c\x2dc56615cb77c9.mount: Deactivated successfully. 
Jan 17 00:03:20.818538 systemd-networkd[1941]: calid24b52934d7: Gained IPv6LL Jan 17 00:03:20.852544 containerd[2019]: time="2026-01-17T00:03:20.852011490Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:20.854538 containerd[2019]: time="2026-01-17T00:03:20.854430522Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:03:20.855245 containerd[2019]: time="2026-01-17T00:03:20.854653482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:03:20.855342 kubelet[3407]: E0117 00:03:20.854909 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:03:20.855342 kubelet[3407]: E0117 00:03:20.854966 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:03:20.858108 kubelet[3407]: E0117 00:03:20.855156 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t94sc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dfrr7_calico-system(2760346a-cdd2-4959-9cca-5bf87123f24a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:20.859628 kubelet[3407]: E0117 00:03:20.859316 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dfrr7" podUID="2760346a-cdd2-4959-9cca-5bf87123f24a" Jan 17 00:03:21.021314 kubelet[3407]: E0117 00:03:21.021096 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dfrr7" podUID="2760346a-cdd2-4959-9cca-5bf87123f24a" Jan 17 00:03:21.024127 kubelet[3407]: E0117 00:03:21.023769 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67f478bb65-pq6fw" podUID="b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31" Jan 17 00:03:21.053089 kubelet[3407]: I0117 00:03:21.052980 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-982qf" podStartSLOduration=53.052956543 podStartE2EDuration="53.052956543s" podCreationTimestamp="2026-01-17 00:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:03:21.051674355 +0000 UTC m=+57.774585216" watchObservedRunningTime="2026-01-17 00:03:21.052956543 
+0000 UTC m=+57.775867404" Jan 17 00:03:21.115692 systemd-networkd[1941]: cali58415f027bb: Link UP Jan 17 00:03:21.118819 systemd-networkd[1941]: cali58415f027bb: Gained carrier Jan 17 00:03:21.162810 containerd[2019]: 2026-01-17 00:03:20.927 [INFO][5341] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0 calico-apiserver-76869b6969- calico-apiserver 98e123b4-3ef3-4dbb-b304-2875273a6844 1016 0 2026-01-17 00:02:41 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:76869b6969 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-30-130 calico-apiserver-76869b6969-4smvz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali58415f027bb [] [] }} ContainerID="7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26" Namespace="calico-apiserver" Pod="calico-apiserver-76869b6969-4smvz" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-" Jan 17 00:03:21.162810 containerd[2019]: 2026-01-17 00:03:20.927 [INFO][5341] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26" Namespace="calico-apiserver" Pod="calico-apiserver-76869b6969-4smvz" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0" Jan 17 00:03:21.162810 containerd[2019]: 2026-01-17 00:03:20.978 [INFO][5353] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26" HandleID="k8s-pod-network.7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0" Jan 17 00:03:21.162810 containerd[2019]: 2026-01-17 00:03:20.978 [INFO][5353] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26" HandleID="k8s-pod-network.7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024afe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-30-130", "pod":"calico-apiserver-76869b6969-4smvz", "timestamp":"2026-01-17 00:03:20.97840671 +0000 UTC"}, Hostname:"ip-172-31-30-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:03:21.162810 containerd[2019]: 2026-01-17 00:03:20.978 [INFO][5353] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:21.162810 containerd[2019]: 2026-01-17 00:03:20.978 [INFO][5353] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:03:21.162810 containerd[2019]: 2026-01-17 00:03:20.978 [INFO][5353] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-130' Jan 17 00:03:21.162810 containerd[2019]: 2026-01-17 00:03:20.992 [INFO][5353] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26" host="ip-172-31-30-130" Jan 17 00:03:21.162810 containerd[2019]: 2026-01-17 00:03:21.003 [INFO][5353] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-130" Jan 17 00:03:21.162810 containerd[2019]: 2026-01-17 00:03:21.017 [INFO][5353] ipam/ipam.go 511: Trying affinity for 192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:21.162810 containerd[2019]: 2026-01-17 00:03:21.027 [INFO][5353] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:21.162810 containerd[2019]: 2026-01-17 00:03:21.033 [INFO][5353] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:21.162810 containerd[2019]: 2026-01-17 00:03:21.033 [INFO][5353] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26" host="ip-172-31-30-130" Jan 17 00:03:21.162810 containerd[2019]: 2026-01-17 00:03:21.045 [INFO][5353] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26 Jan 17 00:03:21.162810 containerd[2019]: 2026-01-17 00:03:21.070 [INFO][5353] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26" host="ip-172-31-30-130" Jan 17 00:03:21.162810 containerd[2019]: 2026-01-17 00:03:21.100 [INFO][5353] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.70/26] block=192.168.121.64/26 handle="k8s-pod-network.7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26" host="ip-172-31-30-130" Jan 17 00:03:21.162810 containerd[2019]: 2026-01-17 00:03:21.100 [INFO][5353] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.70/26] handle="k8s-pod-network.7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26" host="ip-172-31-30-130" Jan 17 00:03:21.162810 containerd[2019]: 2026-01-17 00:03:21.100 [INFO][5353] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:03:21.162810 containerd[2019]: 2026-01-17 00:03:21.101 [INFO][5353] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.70/26] IPv6=[] ContainerID="7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26" HandleID="k8s-pod-network.7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0" Jan 17 00:03:21.166795 containerd[2019]: 2026-01-17 00:03:21.108 [INFO][5341] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26" Namespace="calico-apiserver" Pod="calico-apiserver-76869b6969-4smvz" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0", GenerateName:"calico-apiserver-76869b6969-", Namespace:"calico-apiserver", SelfLink:"", UID:"98e123b4-3ef3-4dbb-b304-2875273a6844", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76869b6969", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"", Pod:"calico-apiserver-76869b6969-4smvz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58415f027bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:21.166795 containerd[2019]: 2026-01-17 00:03:21.108 [INFO][5341] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.70/32] ContainerID="7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26" Namespace="calico-apiserver" Pod="calico-apiserver-76869b6969-4smvz" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0" Jan 17 00:03:21.166795 containerd[2019]: 2026-01-17 00:03:21.109 [INFO][5341] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali58415f027bb ContainerID="7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26" Namespace="calico-apiserver" Pod="calico-apiserver-76869b6969-4smvz" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0" Jan 17 00:03:21.166795 containerd[2019]: 2026-01-17 00:03:21.118 [INFO][5341] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26" Namespace="calico-apiserver" Pod="calico-apiserver-76869b6969-4smvz" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0" Jan 17 00:03:21.166795 containerd[2019]: 2026-01-17 00:03:21.121 [INFO][5341] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26" Namespace="calico-apiserver" Pod="calico-apiserver-76869b6969-4smvz" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0", GenerateName:"calico-apiserver-76869b6969-", Namespace:"calico-apiserver", SelfLink:"", UID:"98e123b4-3ef3-4dbb-b304-2875273a6844", ResourceVersion:"1016", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76869b6969", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26", Pod:"calico-apiserver-76869b6969-4smvz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58415f027bb", MAC:"f2:84:54:fb:86:98", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:21.166795 containerd[2019]: 2026-01-17 00:03:21.150 [INFO][5341] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26" Namespace="calico-apiserver" Pod="calico-apiserver-76869b6969-4smvz" WorkloadEndpoint="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0" Jan 17 00:03:21.221771 containerd[2019]: time="2026-01-17T00:03:21.218145927Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:03:21.221771 containerd[2019]: time="2026-01-17T00:03:21.218277231Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:03:21.221771 containerd[2019]: time="2026-01-17T00:03:21.218332851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:21.221771 containerd[2019]: time="2026-01-17T00:03:21.218552295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:21.298539 systemd[1]: Started cri-containerd-7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26.scope - libcontainer container 7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26. 
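Editor's note: the host-side interface name above (cali58415f027bb) is not arbitrary. Calico derives it deterministically from the workload endpoint identity, so repeated CNI invocations for the same pod land on the same device. A minimal Go sketch of that scheme, assuming the prefix-plus-truncated-SHA-1 construction Calico is understood to use; the exact hash input is an assumption, the log only shows the result:

```go
// Hedged sketch, not Calico source: names such as "cali58415f027bb"
// are taken to be "cali" plus the first 11 hex characters of a SHA-1
// over the workload endpoint identity.
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName derives a stable 15-character interface name from an
// endpoint identifier, staying under the kernel's IFNAMSIZ limit.
func vethName(prefix, endpointID string) string {
	sum := sha1.Sum([]byte(endpointID))
	return prefix + hex.EncodeToString(sum[:])[:11]
}

func main() {
	// Hypothetical endpoint identity, for illustration only.
	fmt.Println(vethName("cali", "calico-apiserver/calico-apiserver-76869b6969-4smvz/eth0"))
}
```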
Jan 17 00:03:21.479990 containerd[2019]: time="2026-01-17T00:03:21.479912909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76869b6969-4smvz,Uid:98e123b4-3ef3-4dbb-b304-2875273a6844,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26\"" Jan 17 00:03:21.485082 containerd[2019]: time="2026-01-17T00:03:21.484847069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:03:21.513408 containerd[2019]: time="2026-01-17T00:03:21.513338861Z" level=info msg="StopPodSandbox for \"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\"" Jan 17 00:03:21.675125 containerd[2019]: 2026-01-17 00:03:21.600 [INFO][5424] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" Jan 17 00:03:21.675125 containerd[2019]: 2026-01-17 00:03:21.601 [INFO][5424] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" iface="eth0" netns="/var/run/netns/cni-69ec296c-ec26-ea7a-5a3e-7ae730504657" Jan 17 00:03:21.675125 containerd[2019]: 2026-01-17 00:03:21.602 [INFO][5424] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" iface="eth0" netns="/var/run/netns/cni-69ec296c-ec26-ea7a-5a3e-7ae730504657" Jan 17 00:03:21.675125 containerd[2019]: 2026-01-17 00:03:21.606 [INFO][5424] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" iface="eth0" netns="/var/run/netns/cni-69ec296c-ec26-ea7a-5a3e-7ae730504657" Jan 17 00:03:21.675125 containerd[2019]: 2026-01-17 00:03:21.606 [INFO][5424] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" Jan 17 00:03:21.675125 containerd[2019]: 2026-01-17 00:03:21.606 [INFO][5424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" Jan 17 00:03:21.675125 containerd[2019]: 2026-01-17 00:03:21.647 [INFO][5431] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" HandleID="k8s-pod-network.feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0" Jan 17 00:03:21.675125 containerd[2019]: 2026-01-17 00:03:21.647 [INFO][5431] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:21.675125 containerd[2019]: 2026-01-17 00:03:21.648 [INFO][5431] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:21.675125 containerd[2019]: 2026-01-17 00:03:21.663 [WARNING][5431] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" HandleID="k8s-pod-network.feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0" Jan 17 00:03:21.675125 containerd[2019]: 2026-01-17 00:03:21.663 [INFO][5431] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" HandleID="k8s-pod-network.feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0" Jan 17 00:03:21.675125 containerd[2019]: 2026-01-17 00:03:21.666 [INFO][5431] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:21.675125 containerd[2019]: 2026-01-17 00:03:21.671 [INFO][5424] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" Jan 17 00:03:21.677518 containerd[2019]: time="2026-01-17T00:03:21.675398274Z" level=info msg="TearDown network for sandbox \"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\" successfully" Jan 17 00:03:21.677518 containerd[2019]: time="2026-01-17T00:03:21.675439446Z" level=info msg="StopPodSandbox for \"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\" returns successfully" Jan 17 00:03:21.677518 containerd[2019]: time="2026-01-17T00:03:21.676425030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p9b6s,Uid:100157b3-6e13-496d-9d2a-b11a40a79c18,Namespace:kube-system,Attempt:1,}" Jan 17 00:03:21.714568 systemd-networkd[1941]: cali388d096fd98: Gained IPv6LL Jan 17 00:03:21.779453 containerd[2019]: time="2026-01-17T00:03:21.779017302Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:21.783829 containerd[2019]: time="2026-01-17T00:03:21.783679326Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:03:21.786271 containerd[2019]: time="2026-01-17T00:03:21.785659854Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:03:21.787411 kubelet[3407]: E0117 00:03:21.786404 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:03:21.787960 kubelet[3407]: E0117 00:03:21.787418 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:03:21.787960 kubelet[3407]: E0117 00:03:21.787866 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lx5xv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76869b6969-4smvz_calico-apiserver(98e123b4-3ef3-4dbb-b304-2875273a6844): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:21.791386 systemd[1]: run-netns-cni\x2d69ec296c\x2dec26\x2dea7a\x2d5a3e\x2d7ae730504657.mount: Deactivated successfully. 
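Editor's note: the failure chain above is worth tracing. containerd's resolver logs "trying next host - response was http.StatusNotFound", the CRI PullImage RPC surfaces that as code = NotFound, and kubelet relays it through log.go, kuberuntime_image.go, and finally the pod worker as ErrImagePull. The registry-side condition can be reproduced by hand with a HEAD on the OCI manifest endpoint; a sketch under the assumption that ghcr.io follows the standard Docker token protocol for anonymous pulls:

```go
// Hedged sketch: reproduce the 404 the resolver saw for
// ghcr.io/flatcar/calico/apiserver:v3.30.4. This is not containerd
// code, just the same HTTP condition checked with the stdlib.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Anonymous pull token (Docker registry token protocol, assumed).
	tr, err := http.Get("https://ghcr.io/token?scope=repository:flatcar/calico/apiserver:pull")
	if err != nil {
		panic(err)
	}
	defer tr.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(tr.Body).Decode(&tok); err != nil {
		panic(err)
	}

	req, err := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/flatcar/calico/apiserver/manifests/v3.30.4", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer res.Body.Close()
	fmt.Println(res.Status) // a 404 here is what kubelet reports as ErrImagePull
}
```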
Jan 17 00:03:21.793330 kubelet[3407]: E0117 00:03:21.792551 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76869b6969-4smvz" podUID="98e123b4-3ef3-4dbb-b304-2875273a6844" Jan 17 00:03:21.922311 systemd-networkd[1941]: cali3b16d4e9a34: Link UP Jan 17 00:03:21.925897 systemd-networkd[1941]: cali3b16d4e9a34: Gained carrier Jan 17 00:03:21.979880 containerd[2019]: 2026-01-17 00:03:21.768 [INFO][5438] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0 coredns-668d6bf9bc- kube-system 100157b3-6e13-496d-9d2a-b11a40a79c18 1039 0 2026-01-17 00:02:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-30-130 coredns-668d6bf9bc-p9b6s eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3b16d4e9a34 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6" Namespace="kube-system" Pod="coredns-668d6bf9bc-p9b6s" WorkloadEndpoint="ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-" Jan 17 00:03:21.979880 containerd[2019]: 2026-01-17 00:03:21.768 [INFO][5438] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6" Namespace="kube-system" Pod="coredns-668d6bf9bc-p9b6s" WorkloadEndpoint="ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0" Jan 17 00:03:21.979880 containerd[2019]: 2026-01-17 00:03:21.835 [INFO][5449] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6" HandleID="k8s-pod-network.e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0" Jan 17 00:03:21.979880 containerd[2019]: 2026-01-17 00:03:21.836 [INFO][5449] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6" HandleID="k8s-pod-network.e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003395f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-30-130", "pod":"coredns-668d6bf9bc-p9b6s", "timestamp":"2026-01-17 00:03:21.835832707 +0000 UTC"}, Hostname:"ip-172-31-30-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:03:21.979880 containerd[2019]: 2026-01-17 00:03:21.836 [INFO][5449] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:21.979880 containerd[2019]: 2026-01-17 00:03:21.836 [INFO][5449] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
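Editor's note: the "About to acquire host-wide IPAM lock" / "Acquired host-wide IPAM lock" pair shows that concurrent CNI invocations on one node serialize their IPAM work. A minimal sketch of that pattern using an advisory file lock; the mechanism and the lock-file path are assumptions for illustration, not Calico's actual implementation:

```go
// Hedged sketch of node-local serialization with flock(2); Linux-only,
// and the path below is made up.
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	f, err := os.OpenFile("/tmp/ipam.lock", os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Blocks until any other CNI invocation releases the lock.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		panic(err)
	}
	fmt.Println("acquired host-wide IPAM lock")

	// ... assign or release addresses against the shared datastore ...

	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_UN); err != nil {
		panic(err)
	}
	fmt.Println("released host-wide IPAM lock")
}
```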
Jan 17 00:03:21.979880 containerd[2019]: 2026-01-17 00:03:21.836 [INFO][5449] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-130' Jan 17 00:03:21.979880 containerd[2019]: 2026-01-17 00:03:21.851 [INFO][5449] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6" host="ip-172-31-30-130" Jan 17 00:03:21.979880 containerd[2019]: 2026-01-17 00:03:21.858 [INFO][5449] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-130" Jan 17 00:03:21.979880 containerd[2019]: 2026-01-17 00:03:21.866 [INFO][5449] ipam/ipam.go 511: Trying affinity for 192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:21.979880 containerd[2019]: 2026-01-17 00:03:21.869 [INFO][5449] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:21.979880 containerd[2019]: 2026-01-17 00:03:21.873 [INFO][5449] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:21.979880 containerd[2019]: 2026-01-17 00:03:21.873 [INFO][5449] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6" host="ip-172-31-30-130" Jan 17 00:03:21.979880 containerd[2019]: 2026-01-17 00:03:21.876 [INFO][5449] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6 Jan 17 00:03:21.979880 containerd[2019]: 2026-01-17 00:03:21.885 [INFO][5449] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6" host="ip-172-31-30-130" Jan 17 00:03:21.979880 containerd[2019]: 2026-01-17 00:03:21.907 [INFO][5449] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.71/26] block=192.168.121.64/26 handle="k8s-pod-network.e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6" host="ip-172-31-30-130" Jan 17 00:03:21.979880 containerd[2019]: 2026-01-17 00:03:21.908 [INFO][5449] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.71/26] handle="k8s-pod-network.e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6" host="ip-172-31-30-130" Jan 17 00:03:21.979880 containerd[2019]: 2026-01-17 00:03:21.908 [INFO][5449] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
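Editor's note: the affinity lines above reflect Calico's block-based IPAM: the pool is carved into /26 blocks of 64 addresses, each node prefers blocks affine to it, and the new address (192.168.121.71) comes from this node's existing 192.168.121.64/26 block. The arithmetic is plain prefix math:

```go
// Hedged sketch of the block arithmetic only; Calico's real IPAM also
// handles affinity claims and datastore writes, as the log shows.
package main

import (
	"fmt"
	"net/netip"
)

// blockFor zeroes the host bits, mapping an address to its /26 block.
func blockFor(ip netip.Addr) netip.Prefix {
	p, _ := ip.Prefix(26)
	return p
}

func main() {
	ip := netip.MustParseAddr("192.168.121.71")
	block := blockFor(ip)
	fmt.Println(block)                    // 192.168.121.64/26, as in the log
	fmt.Println(1 << (32 - block.Bits())) // 64 addresses per block
}
```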
Jan 17 00:03:21.979880 containerd[2019]: 2026-01-17 00:03:21.908 [INFO][5449] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.71/26] IPv6=[] ContainerID="e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6" HandleID="k8s-pod-network.e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0" Jan 17 00:03:21.983692 containerd[2019]: 2026-01-17 00:03:21.915 [INFO][5438] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6" Namespace="kube-system" Pod="coredns-668d6bf9bc-p9b6s" WorkloadEndpoint="ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"100157b3-6e13-496d-9d2a-b11a40a79c18", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"", Pod:"coredns-668d6bf9bc-p9b6s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b16d4e9a34", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:21.983692 containerd[2019]: 2026-01-17 00:03:21.915 [INFO][5438] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.71/32] ContainerID="e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6" Namespace="kube-system" Pod="coredns-668d6bf9bc-p9b6s" WorkloadEndpoint="ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0" Jan 17 00:03:21.983692 containerd[2019]: 2026-01-17 00:03:21.915 [INFO][5438] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b16d4e9a34 ContainerID="e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6" Namespace="kube-system" Pod="coredns-668d6bf9bc-p9b6s" WorkloadEndpoint="ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0" Jan 17 00:03:21.983692 containerd[2019]: 2026-01-17 00:03:21.928 [INFO][5438] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6" Namespace="kube-system" Pod="coredns-668d6bf9bc-p9b6s" 
WorkloadEndpoint="ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0" Jan 17 00:03:21.983692 containerd[2019]: 2026-01-17 00:03:21.933 [INFO][5438] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6" Namespace="kube-system" Pod="coredns-668d6bf9bc-p9b6s" WorkloadEndpoint="ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"100157b3-6e13-496d-9d2a-b11a40a79c18", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6", Pod:"coredns-668d6bf9bc-p9b6s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b16d4e9a34", MAC:"ea:84:ba:90:ec:4b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:21.983692 containerd[2019]: 2026-01-17 00:03:21.974 [INFO][5438] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6" Namespace="kube-system" Pod="coredns-668d6bf9bc-p9b6s" WorkloadEndpoint="ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0" Jan 17 00:03:22.027454 containerd[2019]: time="2026-01-17T00:03:22.026733796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:03:22.028143 containerd[2019]: time="2026-01-17T00:03:22.028022884Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:03:22.028143 containerd[2019]: time="2026-01-17T00:03:22.028081180Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:22.032258 containerd[2019]: time="2026-01-17T00:03:22.028830472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:22.067341 kubelet[3407]: E0117 00:03:22.065499 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dfrr7" podUID="2760346a-cdd2-4959-9cca-5bf87123f24a" Jan 17 00:03:22.067341 kubelet[3407]: E0117 00:03:22.065548 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76869b6969-4smvz" podUID="98e123b4-3ef3-4dbb-b304-2875273a6844" Jan 17 00:03:22.114854 systemd[1]: Started cri-containerd-e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6.scope - libcontainer container e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6. Jan 17 00:03:22.162416 systemd-networkd[1941]: cali2325dc0e017: Gained IPv6LL Jan 17 00:03:22.239361 containerd[2019]: time="2026-01-17T00:03:22.239073389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p9b6s,Uid:100157b3-6e13-496d-9d2a-b11a40a79c18,Namespace:kube-system,Attempt:1,} returns sandbox id \"e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6\"" Jan 17 00:03:22.254685 containerd[2019]: time="2026-01-17T00:03:22.254449829Z" level=info msg="CreateContainer within sandbox \"e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 00:03:22.349898 containerd[2019]: time="2026-01-17T00:03:22.349839833Z" level=info msg="CreateContainer within sandbox \"e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c0c16a2f4499145c13f1e1b5b2916e2debf1e5e782ee3ac8ffaee5f22fef90d2\"" Jan 17 00:03:22.356608 containerd[2019]: time="2026-01-17T00:03:22.356529557Z" level=info msg="StartContainer for \"c0c16a2f4499145c13f1e1b5b2916e2debf1e5e782ee3ac8ffaee5f22fef90d2\"" Jan 17 00:03:22.436564 systemd[1]: Started cri-containerd-c0c16a2f4499145c13f1e1b5b2916e2debf1e5e782ee3ac8ffaee5f22fef90d2.scope - libcontainer container c0c16a2f4499145c13f1e1b5b2916e2debf1e5e782ee3ac8ffaee5f22fef90d2. 
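Editor's note: both pods above have moved from ErrImagePull to ImagePullBackOff, meaning the pull itself already failed and kubelet is now waiting out an exponential backoff before retrying. A sketch of the retry cadence, assuming kubelet's commonly documented defaults of a 10s initial delay doubling to a 5m cap (this node's actual settings are not in the log):

```go
// Hedged sketch: the delay schedule behind ImagePullBackOff, using
// assumed defaults (10s initial, x2 growth, 5m cap).
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 7; attempt++ {
		fmt.Printf("pull attempt %d: back off %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```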
Jan 17 00:03:22.526334 containerd[2019]: time="2026-01-17T00:03:22.524394438Z" level=info msg="StartContainer for \"c0c16a2f4499145c13f1e1b5b2916e2debf1e5e782ee3ac8ffaee5f22fef90d2\" returns successfully" Jan 17 00:03:22.528120 containerd[2019]: time="2026-01-17T00:03:22.527898426Z" level=info msg="StopPodSandbox for \"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\"" Jan 17 00:03:22.777002 containerd[2019]: 2026-01-17 00:03:22.677 [INFO][5554] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" Jan 17 00:03:22.777002 containerd[2019]: 2026-01-17 00:03:22.677 [INFO][5554] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" iface="eth0" netns="/var/run/netns/cni-cad8cb68-1752-3ee2-25ff-993c89516afd" Jan 17 00:03:22.777002 containerd[2019]: 2026-01-17 00:03:22.678 [INFO][5554] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" iface="eth0" netns="/var/run/netns/cni-cad8cb68-1752-3ee2-25ff-993c89516afd" Jan 17 00:03:22.777002 containerd[2019]: 2026-01-17 00:03:22.680 [INFO][5554] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" iface="eth0" netns="/var/run/netns/cni-cad8cb68-1752-3ee2-25ff-993c89516afd" Jan 17 00:03:22.777002 containerd[2019]: 2026-01-17 00:03:22.680 [INFO][5554] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" Jan 17 00:03:22.777002 containerd[2019]: 2026-01-17 00:03:22.680 [INFO][5554] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" Jan 17 00:03:22.777002 containerd[2019]: 2026-01-17 00:03:22.747 [INFO][5563] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" HandleID="k8s-pod-network.a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" Workload="ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0" Jan 17 00:03:22.777002 containerd[2019]: 2026-01-17 00:03:22.748 [INFO][5563] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:22.777002 containerd[2019]: 2026-01-17 00:03:22.748 [INFO][5563] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:22.777002 containerd[2019]: 2026-01-17 00:03:22.762 [WARNING][5563] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" HandleID="k8s-pod-network.a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" Workload="ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0" Jan 17 00:03:22.777002 containerd[2019]: 2026-01-17 00:03:22.762 [INFO][5563] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" HandleID="k8s-pod-network.a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" Workload="ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0" Jan 17 00:03:22.777002 containerd[2019]: 2026-01-17 00:03:22.766 [INFO][5563] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:03:22.777002 containerd[2019]: 2026-01-17 00:03:22.770 [INFO][5554] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" Jan 17 00:03:22.794547 containerd[2019]: time="2026-01-17T00:03:22.793475443Z" level=info msg="TearDown network for sandbox \"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\" successfully" Jan 17 00:03:22.794547 containerd[2019]: time="2026-01-17T00:03:22.793526323Z" level=info msg="StopPodSandbox for \"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\" returns successfully" Jan 17 00:03:22.797959 containerd[2019]: time="2026-01-17T00:03:22.796591795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl689,Uid:0e8ea394-25e8-46d5-8e69-e40f87a471c2,Namespace:calico-system,Attempt:1,}" Jan 17 00:03:22.814126 systemd[1]: run-netns-cni\x2dcad8cb68\x2d1752\x2d3ee2\x2d25ff\x2d993c89516afd.mount: Deactivated successfully. Jan 17 00:03:22.930809 systemd-networkd[1941]: cali58415f027bb: Gained IPv6LL Jan 17 00:03:23.065987 kubelet[3407]: E0117 00:03:23.065234 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76869b6969-4smvz" podUID="98e123b4-3ef3-4dbb-b304-2875273a6844" Jan 17 00:03:23.097246 kubelet[3407]: I0117 00:03:23.096291 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-p9b6s" podStartSLOduration=55.096264425 podStartE2EDuration="55.096264425s" podCreationTimestamp="2026-01-17 00:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:03:23.092161493 +0000 UTC m=+59.815072366" watchObservedRunningTime="2026-01-17 00:03:23.096264425 +0000 UTC m=+59.819175298" Jan 17 00:03:23.156053 systemd-networkd[1941]: cali0be3f450cbe: Link UP Jan 17 00:03:23.156879 systemd-networkd[1941]: cali0be3f450cbe: Gained carrier Jan 17 00:03:23.224709 containerd[2019]: 2026-01-17 00:03:22.955 [INFO][5571] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0 csi-node-driver- calico-system 0e8ea394-25e8-46d5-8e69-e40f87a471c2 1069 0 2026-01-17 00:02:56 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-30-130 csi-node-driver-jl689 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali0be3f450cbe [] [] }} ContainerID="7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8" Namespace="calico-system" Pod="csi-node-driver-jl689" WorkloadEndpoint="ip--172--31--30--130-k8s-csi--node--driver--jl689-" Jan 17 00:03:23.224709 containerd[2019]: 2026-01-17 00:03:22.955 [INFO][5571] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8" Namespace="calico-system" Pod="csi-node-driver-jl689" WorkloadEndpoint="ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0" Jan 17 00:03:23.224709 containerd[2019]: 2026-01-17 00:03:23.011 [INFO][5582] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8" HandleID="k8s-pod-network.7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8" Workload="ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0" Jan 17 00:03:23.224709 containerd[2019]: 2026-01-17 00:03:23.011 [INFO][5582] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8" HandleID="k8s-pod-network.7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8" Workload="ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-30-130", "pod":"csi-node-driver-jl689", "timestamp":"2026-01-17 00:03:23.011560756 +0000 UTC"}, Hostname:"ip-172-31-30-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:03:23.224709 containerd[2019]: 2026-01-17 00:03:23.012 [INFO][5582] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:23.224709 containerd[2019]: 2026-01-17 00:03:23.012 [INFO][5582] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:23.224709 containerd[2019]: 2026-01-17 00:03:23.012 [INFO][5582] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-30-130' Jan 17 00:03:23.224709 containerd[2019]: 2026-01-17 00:03:23.034 [INFO][5582] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8" host="ip-172-31-30-130" Jan 17 00:03:23.224709 containerd[2019]: 2026-01-17 00:03:23.042 [INFO][5582] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-30-130" Jan 17 00:03:23.224709 containerd[2019]: 2026-01-17 00:03:23.057 [INFO][5582] ipam/ipam.go 511: Trying affinity for 192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:23.224709 containerd[2019]: 2026-01-17 00:03:23.068 [INFO][5582] ipam/ipam.go 158: Attempting to load block cidr=192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:23.224709 containerd[2019]: 2026-01-17 00:03:23.079 [INFO][5582] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.121.64/26 host="ip-172-31-30-130" Jan 17 00:03:23.224709 containerd[2019]: 2026-01-17 00:03:23.079 [INFO][5582] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.121.64/26 handle="k8s-pod-network.7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8" host="ip-172-31-30-130" Jan 17 00:03:23.224709 containerd[2019]: 2026-01-17 00:03:23.083 [INFO][5582] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8 Jan 17 00:03:23.224709 containerd[2019]: 2026-01-17 00:03:23.106 [INFO][5582] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.121.64/26 handle="k8s-pod-network.7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8" host="ip-172-31-30-130" Jan 17 
00:03:23.224709 containerd[2019]: 2026-01-17 00:03:23.140 [INFO][5582] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.121.72/26] block=192.168.121.64/26 handle="k8s-pod-network.7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8" host="ip-172-31-30-130" Jan 17 00:03:23.224709 containerd[2019]: 2026-01-17 00:03:23.140 [INFO][5582] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.121.72/26] handle="k8s-pod-network.7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8" host="ip-172-31-30-130" Jan 17 00:03:23.224709 containerd[2019]: 2026-01-17 00:03:23.140 [INFO][5582] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:23.224709 containerd[2019]: 2026-01-17 00:03:23.140 [INFO][5582] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.121.72/26] IPv6=[] ContainerID="7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8" HandleID="k8s-pod-network.7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8" Workload="ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0" Jan 17 00:03:23.229380 containerd[2019]: 2026-01-17 00:03:23.146 [INFO][5571] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8" Namespace="calico-system" Pod="csi-node-driver-jl689" WorkloadEndpoint="ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e8ea394-25e8-46d5-8e69-e40f87a471c2", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"", Pod:"csi-node-driver-jl689", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0be3f450cbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:23.229380 containerd[2019]: 2026-01-17 00:03:23.147 [INFO][5571] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.121.72/32] ContainerID="7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8" Namespace="calico-system" Pod="csi-node-driver-jl689" WorkloadEndpoint="ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0" Jan 17 00:03:23.229380 containerd[2019]: 2026-01-17 00:03:23.147 [INFO][5571] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0be3f450cbe ContainerID="7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8" Namespace="calico-system" Pod="csi-node-driver-jl689" 
WorkloadEndpoint="ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0" Jan 17 00:03:23.229380 containerd[2019]: 2026-01-17 00:03:23.155 [INFO][5571] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8" Namespace="calico-system" Pod="csi-node-driver-jl689" WorkloadEndpoint="ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0" Jan 17 00:03:23.229380 containerd[2019]: 2026-01-17 00:03:23.159 [INFO][5571] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8" Namespace="calico-system" Pod="csi-node-driver-jl689" WorkloadEndpoint="ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e8ea394-25e8-46d5-8e69-e40f87a471c2", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8", Pod:"csi-node-driver-jl689", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0be3f450cbe", MAC:"52:25:f9:45:86:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:23.229380 containerd[2019]: 2026-01-17 00:03:23.217 [INFO][5571] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8" Namespace="calico-system" Pod="csi-node-driver-jl689" WorkloadEndpoint="ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0" Jan 17 00:03:23.268275 containerd[2019]: time="2026-01-17T00:03:23.267708978Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:03:23.269923 containerd[2019]: time="2026-01-17T00:03:23.269466894Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:03:23.269923 containerd[2019]: time="2026-01-17T00:03:23.269514438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:23.269923 containerd[2019]: time="2026-01-17T00:03:23.269706054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:03:23.331589 systemd[1]: Started cri-containerd-7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8.scope - libcontainer container 7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8. Jan 17 00:03:23.443493 containerd[2019]: time="2026-01-17T00:03:23.443100331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jl689,Uid:0e8ea394-25e8-46d5-8e69-e40f87a471c2,Namespace:calico-system,Attempt:1,} returns sandbox id \"7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8\"" Jan 17 00:03:23.443818 systemd-networkd[1941]: cali3b16d4e9a34: Gained IPv6LL Jan 17 00:03:23.453879 containerd[2019]: time="2026-01-17T00:03:23.452475643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:03:23.495997 containerd[2019]: time="2026-01-17T00:03:23.495876439Z" level=info msg="StopPodSandbox for \"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\"" Jan 17 00:03:23.696491 containerd[2019]: 2026-01-17 00:03:23.605 [WARNING][5652] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0", GenerateName:"calico-apiserver-76869b6969-", Namespace:"calico-apiserver", SelfLink:"", UID:"cecd0bc0-a5de-49ac-853f-0e0f9c309bd4", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76869b6969", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0", Pod:"calico-apiserver-76869b6969-b9cdk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3075b529f58", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:23.696491 containerd[2019]: 2026-01-17 00:03:23.606 [INFO][5652] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" Jan 17 00:03:23.696491 containerd[2019]: 2026-01-17 00:03:23.606 [INFO][5652] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" iface="eth0" netns="" Jan 17 00:03:23.696491 containerd[2019]: 2026-01-17 00:03:23.606 [INFO][5652] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" Jan 17 00:03:23.696491 containerd[2019]: 2026-01-17 00:03:23.606 [INFO][5652] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" Jan 17 00:03:23.696491 containerd[2019]: 2026-01-17 00:03:23.655 [INFO][5663] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" HandleID="k8s-pod-network.3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0" Jan 17 00:03:23.696491 containerd[2019]: 2026-01-17 00:03:23.658 [INFO][5663] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:23.696491 containerd[2019]: 2026-01-17 00:03:23.658 [INFO][5663] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:23.696491 containerd[2019]: 2026-01-17 00:03:23.681 [WARNING][5663] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" HandleID="k8s-pod-network.3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0" Jan 17 00:03:23.696491 containerd[2019]: 2026-01-17 00:03:23.681 [INFO][5663] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" HandleID="k8s-pod-network.3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0" Jan 17 00:03:23.696491 containerd[2019]: 2026-01-17 00:03:23.685 [INFO][5663] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:23.696491 containerd[2019]: 2026-01-17 00:03:23.691 [INFO][5652] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" Jan 17 00:03:23.696491 containerd[2019]: time="2026-01-17T00:03:23.695366756Z" level=info msg="TearDown network for sandbox \"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\" successfully" Jan 17 00:03:23.696491 containerd[2019]: time="2026-01-17T00:03:23.695575928Z" level=info msg="StopPodSandbox for \"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\" returns successfully" Jan 17 00:03:23.702837 containerd[2019]: time="2026-01-17T00:03:23.698677364Z" level=info msg="RemovePodSandbox for \"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\"" Jan 17 00:03:23.702837 containerd[2019]: time="2026-01-17T00:03:23.698763380Z" level=info msg="Forcibly stopping sandbox \"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\"" Jan 17 00:03:23.745508 containerd[2019]: time="2026-01-17T00:03:23.745432988Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:23.747901 containerd[2019]: time="2026-01-17T00:03:23.747746780Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:03:23.747901 containerd[2019]: time="2026-01-17T00:03:23.747854624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:03:23.749220 kubelet[3407]: E0117 00:03:23.748270 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:03:23.749220 kubelet[3407]: E0117 00:03:23.748339 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:03:23.752402 kubelet[3407]: E0117 00:03:23.748513 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vmhd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jl689_calico-system(0e8ea394-25e8-46d5-8e69-e40f87a471c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:23.759695 containerd[2019]: time="2026-01-17T00:03:23.759551468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:03:23.940724 containerd[2019]: 2026-01-17 00:03:23.830 [WARNING][5677] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0", GenerateName:"calico-apiserver-76869b6969-", Namespace:"calico-apiserver", SelfLink:"", UID:"cecd0bc0-a5de-49ac-853f-0e0f9c309bd4", ResourceVersion:"1000", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76869b6969", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"9deddc0a8ccf53c9accbc4568ce26489fa50ef97467e450b9df20c49789d29a0", Pod:"calico-apiserver-76869b6969-b9cdk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3075b529f58", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:23.940724 containerd[2019]: 2026-01-17 00:03:23.831 [INFO][5677] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" Jan 17 00:03:23.940724 containerd[2019]: 2026-01-17 00:03:23.831 [INFO][5677] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" iface="eth0" netns="" Jan 17 00:03:23.940724 containerd[2019]: 2026-01-17 00:03:23.831 [INFO][5677] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" Jan 17 00:03:23.940724 containerd[2019]: 2026-01-17 00:03:23.831 [INFO][5677] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" Jan 17 00:03:23.940724 containerd[2019]: 2026-01-17 00:03:23.905 [INFO][5684] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" HandleID="k8s-pod-network.3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0" Jan 17 00:03:23.940724 containerd[2019]: 2026-01-17 00:03:23.906 [INFO][5684] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:23.940724 containerd[2019]: 2026-01-17 00:03:23.906 [INFO][5684] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:23.940724 containerd[2019]: 2026-01-17 00:03:23.927 [WARNING][5684] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" HandleID="k8s-pod-network.3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0" Jan 17 00:03:23.940724 containerd[2019]: 2026-01-17 00:03:23.927 [INFO][5684] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" HandleID="k8s-pod-network.3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--b9cdk-eth0" Jan 17 00:03:23.940724 containerd[2019]: 2026-01-17 00:03:23.930 [INFO][5684] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:23.940724 containerd[2019]: 2026-01-17 00:03:23.934 [INFO][5677] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50" Jan 17 00:03:23.943675 containerd[2019]: time="2026-01-17T00:03:23.941351241Z" level=info msg="TearDown network for sandbox \"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\" successfully" Jan 17 00:03:23.958275 containerd[2019]: time="2026-01-17T00:03:23.957907197Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:03:23.958275 containerd[2019]: time="2026-01-17T00:03:23.958015461Z" level=info msg="RemovePodSandbox \"3d65f706b45dae4ab63916b41cabdaa50a1b0026707cc2f71919c62c59bc4f50\" returns successfully" Jan 17 00:03:23.961833 containerd[2019]: time="2026-01-17T00:03:23.961762077Z" level=info msg="StopPodSandbox for \"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\"" Jan 17 00:03:24.070366 containerd[2019]: time="2026-01-17T00:03:24.069073242Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:24.077104 containerd[2019]: time="2026-01-17T00:03:24.076033722Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:03:24.077104 containerd[2019]: time="2026-01-17T00:03:24.077074998Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:03:24.077848 kubelet[3407]: E0117 00:03:24.077518 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:03:24.077848 kubelet[3407]: E0117 00:03:24.077579 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:03:24.077848 kubelet[3407]: E0117 00:03:24.077726 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vmhd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jl689_calico-system(0e8ea394-25e8-46d5-8e69-e40f87a471c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:24.079436 kubelet[3407]: E0117 00:03:24.078973 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jl689" podUID="0e8ea394-25e8-46d5-8e69-e40f87a471c2" Jan 17 00:03:24.223387 containerd[2019]: 2026-01-17 00:03:24.075 [WARNING][5698] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e8ea394-25e8-46d5-8e69-e40f87a471c2", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8", Pod:"csi-node-driver-jl689", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0be3f450cbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:24.223387 containerd[2019]: 2026-01-17 00:03:24.080 [INFO][5698] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" Jan 17 00:03:24.223387 containerd[2019]: 2026-01-17 00:03:24.080 [INFO][5698] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" iface="eth0" netns="" Jan 17 00:03:24.223387 containerd[2019]: 2026-01-17 00:03:24.081 [INFO][5698] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" Jan 17 00:03:24.223387 containerd[2019]: 2026-01-17 00:03:24.081 [INFO][5698] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" Jan 17 00:03:24.223387 containerd[2019]: 2026-01-17 00:03:24.189 [INFO][5706] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" HandleID="k8s-pod-network.a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" Workload="ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0" Jan 17 00:03:24.223387 containerd[2019]: 2026-01-17 00:03:24.191 [INFO][5706] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:24.223387 containerd[2019]: 2026-01-17 00:03:24.192 [INFO][5706] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:24.223387 containerd[2019]: 2026-01-17 00:03:24.210 [WARNING][5706] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" HandleID="k8s-pod-network.a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" Workload="ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0" Jan 17 00:03:24.223387 containerd[2019]: 2026-01-17 00:03:24.211 [INFO][5706] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" HandleID="k8s-pod-network.a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" Workload="ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0" Jan 17 00:03:24.223387 containerd[2019]: 2026-01-17 00:03:24.216 [INFO][5706] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:24.223387 containerd[2019]: 2026-01-17 00:03:24.219 [INFO][5698] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" Jan 17 00:03:24.223387 containerd[2019]: time="2026-01-17T00:03:24.223074906Z" level=info msg="TearDown network for sandbox \"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\" successfully" Jan 17 00:03:24.223387 containerd[2019]: time="2026-01-17T00:03:24.223113378Z" level=info msg="StopPodSandbox for \"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\" returns successfully" Jan 17 00:03:24.226168 containerd[2019]: time="2026-01-17T00:03:24.225296538Z" level=info msg="RemovePodSandbox for \"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\"" Jan 17 00:03:24.226168 containerd[2019]: time="2026-01-17T00:03:24.225347718Z" level=info msg="Forcibly stopping sandbox \"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\"" Jan 17 00:03:24.362993 containerd[2019]: 2026-01-17 00:03:24.292 [WARNING][5721] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e8ea394-25e8-46d5-8e69-e40f87a471c2", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"7b6b597ab6e3631f78171701a51f56bed0c3ce33932cb8b288203ab962f829a8", Pod:"csi-node-driver-jl689", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.121.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali0be3f450cbe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:24.362993 containerd[2019]: 2026-01-17 00:03:24.294 [INFO][5721] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" Jan 17 00:03:24.362993 containerd[2019]: 2026-01-17 00:03:24.294 [INFO][5721] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" iface="eth0" netns="" Jan 17 00:03:24.362993 containerd[2019]: 2026-01-17 00:03:24.294 [INFO][5721] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" Jan 17 00:03:24.362993 containerd[2019]: 2026-01-17 00:03:24.294 [INFO][5721] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" Jan 17 00:03:24.362993 containerd[2019]: 2026-01-17 00:03:24.340 [INFO][5728] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" HandleID="k8s-pod-network.a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" Workload="ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0" Jan 17 00:03:24.362993 containerd[2019]: 2026-01-17 00:03:24.340 [INFO][5728] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:24.362993 containerd[2019]: 2026-01-17 00:03:24.340 [INFO][5728] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:24.362993 containerd[2019]: 2026-01-17 00:03:24.354 [WARNING][5728] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" HandleID="k8s-pod-network.a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" Workload="ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0" Jan 17 00:03:24.362993 containerd[2019]: 2026-01-17 00:03:24.354 [INFO][5728] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" HandleID="k8s-pod-network.a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" Workload="ip--172--31--30--130-k8s-csi--node--driver--jl689-eth0" Jan 17 00:03:24.362993 containerd[2019]: 2026-01-17 00:03:24.356 [INFO][5728] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:24.362993 containerd[2019]: 2026-01-17 00:03:24.359 [INFO][5721] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e" Jan 17 00:03:24.363836 containerd[2019]: time="2026-01-17T00:03:24.363040147Z" level=info msg="TearDown network for sandbox \"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\" successfully" Jan 17 00:03:24.370421 containerd[2019]: time="2026-01-17T00:03:24.370317847Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:03:24.370539 containerd[2019]: time="2026-01-17T00:03:24.370478923Z" level=info msg="RemovePodSandbox \"a3b0c8262f467e08ce344ae8db464523716a185ca20a493100300e05e29f798e\" returns successfully" Jan 17 00:03:24.371508 containerd[2019]: time="2026-01-17T00:03:24.371449567Z" level=info msg="StopPodSandbox for \"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\"" Jan 17 00:03:24.528836 containerd[2019]: 2026-01-17 00:03:24.461 [WARNING][5743] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" WorkloadEndpoint="ip--172--31--30--130-k8s-whisker--75b94bdbb6--5dd2v-eth0" Jan 17 00:03:24.528836 containerd[2019]: 2026-01-17 00:03:24.461 [INFO][5743] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" Jan 17 00:03:24.528836 containerd[2019]: 2026-01-17 00:03:24.461 [INFO][5743] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" iface="eth0" netns="" Jan 17 00:03:24.528836 containerd[2019]: 2026-01-17 00:03:24.461 [INFO][5743] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" Jan 17 00:03:24.528836 containerd[2019]: 2026-01-17 00:03:24.461 [INFO][5743] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" Jan 17 00:03:24.528836 containerd[2019]: 2026-01-17 00:03:24.503 [INFO][5752] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" HandleID="k8s-pod-network.eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" Workload="ip--172--31--30--130-k8s-whisker--75b94bdbb6--5dd2v-eth0" Jan 17 00:03:24.528836 containerd[2019]: 2026-01-17 00:03:24.504 [INFO][5752] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:24.528836 containerd[2019]: 2026-01-17 00:03:24.504 [INFO][5752] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:24.528836 containerd[2019]: 2026-01-17 00:03:24.519 [WARNING][5752] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" HandleID="k8s-pod-network.eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" Workload="ip--172--31--30--130-k8s-whisker--75b94bdbb6--5dd2v-eth0" Jan 17 00:03:24.528836 containerd[2019]: 2026-01-17 00:03:24.519 [INFO][5752] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" HandleID="k8s-pod-network.eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" Workload="ip--172--31--30--130-k8s-whisker--75b94bdbb6--5dd2v-eth0" Jan 17 00:03:24.528836 containerd[2019]: 2026-01-17 00:03:24.522 [INFO][5752] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:24.528836 containerd[2019]: 2026-01-17 00:03:24.525 [INFO][5743] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" Jan 17 00:03:24.530425 containerd[2019]: time="2026-01-17T00:03:24.530364164Z" level=info msg="TearDown network for sandbox \"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\" successfully" Jan 17 00:03:24.530520 containerd[2019]: time="2026-01-17T00:03:24.530419916Z" level=info msg="StopPodSandbox for \"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\" returns successfully" Jan 17 00:03:24.532167 containerd[2019]: time="2026-01-17T00:03:24.531878324Z" level=info msg="RemovePodSandbox for \"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\"" Jan 17 00:03:24.532167 containerd[2019]: time="2026-01-17T00:03:24.532022384Z" level=info msg="Forcibly stopping sandbox \"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\"" Jan 17 00:03:24.661353 containerd[2019]: 2026-01-17 00:03:24.595 [WARNING][5766] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" WorkloadEndpoint="ip--172--31--30--130-k8s-whisker--75b94bdbb6--5dd2v-eth0" Jan 17 00:03:24.661353 containerd[2019]: 2026-01-17 00:03:24.596 [INFO][5766] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" Jan 17 00:03:24.661353 containerd[2019]: 2026-01-17 00:03:24.596 [INFO][5766] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" iface="eth0" netns="" Jan 17 00:03:24.661353 containerd[2019]: 2026-01-17 00:03:24.596 [INFO][5766] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" Jan 17 00:03:24.661353 containerd[2019]: 2026-01-17 00:03:24.596 [INFO][5766] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" Jan 17 00:03:24.661353 containerd[2019]: 2026-01-17 00:03:24.636 [INFO][5773] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" HandleID="k8s-pod-network.eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" Workload="ip--172--31--30--130-k8s-whisker--75b94bdbb6--5dd2v-eth0" Jan 17 00:03:24.661353 containerd[2019]: 2026-01-17 00:03:24.636 [INFO][5773] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:24.661353 containerd[2019]: 2026-01-17 00:03:24.637 [INFO][5773] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:24.661353 containerd[2019]: 2026-01-17 00:03:24.649 [WARNING][5773] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" HandleID="k8s-pod-network.eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" Workload="ip--172--31--30--130-k8s-whisker--75b94bdbb6--5dd2v-eth0" Jan 17 00:03:24.661353 containerd[2019]: 2026-01-17 00:03:24.649 [INFO][5773] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" HandleID="k8s-pod-network.eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" Workload="ip--172--31--30--130-k8s-whisker--75b94bdbb6--5dd2v-eth0" Jan 17 00:03:24.661353 containerd[2019]: 2026-01-17 00:03:24.652 [INFO][5773] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:24.661353 containerd[2019]: 2026-01-17 00:03:24.658 [INFO][5766] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace" Jan 17 00:03:24.663367 containerd[2019]: time="2026-01-17T00:03:24.663307605Z" level=info msg="TearDown network for sandbox \"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\" successfully" Jan 17 00:03:24.670701 containerd[2019]: time="2026-01-17T00:03:24.670623405Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:03:24.670837 containerd[2019]: time="2026-01-17T00:03:24.670720401Z" level=info msg="RemovePodSandbox \"eac660046e74720116cc2873af2d4dcb53f5d6d41eecef37a57939b61adb4ace\" returns successfully" Jan 17 00:03:24.671455 containerd[2019]: time="2026-01-17T00:03:24.671392893Z" level=info msg="StopPodSandbox for \"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\"" Jan 17 00:03:24.800416 containerd[2019]: 2026-01-17 00:03:24.733 [WARNING][5788] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0", GenerateName:"calico-kube-controllers-67f478bb65-", Namespace:"calico-system", SelfLink:"", UID:"b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67f478bb65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46", Pod:"calico-kube-controllers-67f478bb65-pq6fw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid24b52934d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:24.800416 containerd[2019]: 2026-01-17 00:03:24.733 [INFO][5788] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" Jan 17 00:03:24.800416 containerd[2019]: 2026-01-17 00:03:24.733 [INFO][5788] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" iface="eth0" netns="" Jan 17 00:03:24.800416 containerd[2019]: 2026-01-17 00:03:24.733 [INFO][5788] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" Jan 17 00:03:24.800416 containerd[2019]: 2026-01-17 00:03:24.733 [INFO][5788] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" Jan 17 00:03:24.800416 containerd[2019]: 2026-01-17 00:03:24.775 [INFO][5795] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" HandleID="k8s-pod-network.cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" Workload="ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0" Jan 17 00:03:24.800416 containerd[2019]: 2026-01-17 00:03:24.775 [INFO][5795] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:24.800416 containerd[2019]: 2026-01-17 00:03:24.775 [INFO][5795] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:24.800416 containerd[2019]: 2026-01-17 00:03:24.791 [WARNING][5795] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" HandleID="k8s-pod-network.cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" Workload="ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0" Jan 17 00:03:24.800416 containerd[2019]: 2026-01-17 00:03:24.791 [INFO][5795] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" HandleID="k8s-pod-network.cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" Workload="ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0" Jan 17 00:03:24.800416 containerd[2019]: 2026-01-17 00:03:24.794 [INFO][5795] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:24.800416 containerd[2019]: 2026-01-17 00:03:24.797 [INFO][5788] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" Jan 17 00:03:24.801262 containerd[2019]: time="2026-01-17T00:03:24.800421165Z" level=info msg="TearDown network for sandbox \"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\" successfully" Jan 17 00:03:24.801262 containerd[2019]: time="2026-01-17T00:03:24.800483997Z" level=info msg="StopPodSandbox for \"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\" returns successfully" Jan 17 00:03:24.802832 containerd[2019]: time="2026-01-17T00:03:24.802760049Z" level=info msg="RemovePodSandbox for \"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\"" Jan 17 00:03:24.802832 containerd[2019]: time="2026-01-17T00:03:24.802822521Z" level=info msg="Forcibly stopping sandbox \"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\"" Jan 17 00:03:24.850740 systemd-networkd[1941]: cali0be3f450cbe: Gained IPv6LL Jan 17 00:03:24.932248 containerd[2019]: 2026-01-17 00:03:24.870 [WARNING][5810] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0", GenerateName:"calico-kube-controllers-67f478bb65-", Namespace:"calico-system", SelfLink:"", UID:"b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67f478bb65", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"4f0483846e458452ad6d1e98d441e39344ebaf7bc2dd00fb5b4794789cb5dc46", Pod:"calico-kube-controllers-67f478bb65-pq6fw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.121.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid24b52934d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:24.932248 containerd[2019]: 2026-01-17 00:03:24.870 [INFO][5810] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" Jan 17 00:03:24.932248 containerd[2019]: 2026-01-17 00:03:24.870 [INFO][5810] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" iface="eth0" netns="" Jan 17 00:03:24.932248 containerd[2019]: 2026-01-17 00:03:24.870 [INFO][5810] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" Jan 17 00:03:24.932248 containerd[2019]: 2026-01-17 00:03:24.870 [INFO][5810] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" Jan 17 00:03:24.932248 containerd[2019]: 2026-01-17 00:03:24.909 [INFO][5817] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" HandleID="k8s-pod-network.cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" Workload="ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0" Jan 17 00:03:24.932248 containerd[2019]: 2026-01-17 00:03:24.909 [INFO][5817] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:24.932248 containerd[2019]: 2026-01-17 00:03:24.909 [INFO][5817] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:24.932248 containerd[2019]: 2026-01-17 00:03:24.922 [WARNING][5817] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" HandleID="k8s-pod-network.cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" Workload="ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0" Jan 17 00:03:24.932248 containerd[2019]: 2026-01-17 00:03:24.922 [INFO][5817] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" HandleID="k8s-pod-network.cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" Workload="ip--172--31--30--130-k8s-calico--kube--controllers--67f478bb65--pq6fw-eth0" Jan 17 00:03:24.932248 containerd[2019]: 2026-01-17 00:03:24.924 [INFO][5817] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:24.932248 containerd[2019]: 2026-01-17 00:03:24.928 [INFO][5810] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902" Jan 17 00:03:24.932248 containerd[2019]: time="2026-01-17T00:03:24.931926478Z" level=info msg="TearDown network for sandbox \"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\" successfully" Jan 17 00:03:24.939194 containerd[2019]: time="2026-01-17T00:03:24.939111970Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:03:24.939371 containerd[2019]: time="2026-01-17T00:03:24.939239830Z" level=info msg="RemovePodSandbox \"cde1e4a1a64bd9b7f02c28e0e88e2bdd771f8282844eec88e470197c288f2902\" returns successfully" Jan 17 00:03:24.940591 containerd[2019]: time="2026-01-17T00:03:24.940433734Z" level=info msg="StopPodSandbox for \"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\"" Jan 17 00:03:25.070919 containerd[2019]: 2026-01-17 00:03:25.006 [WARNING][5831] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2760346a-cdd2-4959-9cca-5bf87123f24a", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9", Pod:"goldmane-666569f655-dfrr7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.121.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2325dc0e017", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:25.070919 containerd[2019]: 2026-01-17 00:03:25.006 [INFO][5831] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" Jan 17 00:03:25.070919 containerd[2019]: 2026-01-17 00:03:25.006 [INFO][5831] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" iface="eth0" netns="" Jan 17 00:03:25.070919 containerd[2019]: 2026-01-17 00:03:25.006 [INFO][5831] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" Jan 17 00:03:25.070919 containerd[2019]: 2026-01-17 00:03:25.006 [INFO][5831] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" Jan 17 00:03:25.070919 containerd[2019]: 2026-01-17 00:03:25.047 [INFO][5838] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" HandleID="k8s-pod-network.b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" Workload="ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0" Jan 17 00:03:25.070919 containerd[2019]: 2026-01-17 00:03:25.047 [INFO][5838] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:25.070919 containerd[2019]: 2026-01-17 00:03:25.047 [INFO][5838] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:25.070919 containerd[2019]: 2026-01-17 00:03:25.060 [WARNING][5838] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" HandleID="k8s-pod-network.b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" Workload="ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0" Jan 17 00:03:25.070919 containerd[2019]: 2026-01-17 00:03:25.060 [INFO][5838] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" HandleID="k8s-pod-network.b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" Workload="ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0" Jan 17 00:03:25.070919 containerd[2019]: 2026-01-17 00:03:25.063 [INFO][5838] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:25.070919 containerd[2019]: 2026-01-17 00:03:25.066 [INFO][5831] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" Jan 17 00:03:25.073155 containerd[2019]: time="2026-01-17T00:03:25.070990855Z" level=info msg="TearDown network for sandbox \"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\" successfully" Jan 17 00:03:25.073155 containerd[2019]: time="2026-01-17T00:03:25.071053855Z" level=info msg="StopPodSandbox for \"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\" returns successfully" Jan 17 00:03:25.073155 containerd[2019]: time="2026-01-17T00:03:25.071734039Z" level=info msg="RemovePodSandbox for \"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\"" Jan 17 00:03:25.073155 containerd[2019]: time="2026-01-17T00:03:25.071789911Z" level=info msg="Forcibly stopping sandbox \"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\"" Jan 17 00:03:25.115333 kubelet[3407]: E0117 00:03:25.114244 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jl689" podUID="0e8ea394-25e8-46d5-8e69-e40f87a471c2" Jan 17 00:03:25.248554 containerd[2019]: 2026-01-17 00:03:25.172 [WARNING][5852] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"2760346a-cdd2-4959-9cca-5bf87123f24a", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"4e96b532ab7e21f612c5e741042777cf3a0d9d965589d2581f0608e2356f37f9", Pod:"goldmane-666569f655-dfrr7", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.121.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali2325dc0e017", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:25.248554 containerd[2019]: 2026-01-17 00:03:25.173 [INFO][5852] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" Jan 17 00:03:25.248554 containerd[2019]: 2026-01-17 00:03:25.173 [INFO][5852] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" iface="eth0" netns="" Jan 17 00:03:25.248554 containerd[2019]: 2026-01-17 00:03:25.173 [INFO][5852] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" Jan 17 00:03:25.248554 containerd[2019]: 2026-01-17 00:03:25.173 [INFO][5852] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" Jan 17 00:03:25.248554 containerd[2019]: 2026-01-17 00:03:25.225 [INFO][5859] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" HandleID="k8s-pod-network.b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" Workload="ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0" Jan 17 00:03:25.248554 containerd[2019]: 2026-01-17 00:03:25.226 [INFO][5859] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:25.248554 containerd[2019]: 2026-01-17 00:03:25.226 [INFO][5859] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:25.248554 containerd[2019]: 2026-01-17 00:03:25.240 [WARNING][5859] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" HandleID="k8s-pod-network.b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" Workload="ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0" Jan 17 00:03:25.248554 containerd[2019]: 2026-01-17 00:03:25.240 [INFO][5859] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" HandleID="k8s-pod-network.b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" Workload="ip--172--31--30--130-k8s-goldmane--666569f655--dfrr7-eth0" Jan 17 00:03:25.248554 containerd[2019]: 2026-01-17 00:03:25.242 [INFO][5859] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:25.248554 containerd[2019]: 2026-01-17 00:03:25.245 [INFO][5852] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd" Jan 17 00:03:25.250081 containerd[2019]: time="2026-01-17T00:03:25.248591372Z" level=info msg="TearDown network for sandbox \"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\" successfully" Jan 17 00:03:25.255100 containerd[2019]: time="2026-01-17T00:03:25.255034832Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:03:25.255285 containerd[2019]: time="2026-01-17T00:03:25.255132524Z" level=info msg="RemovePodSandbox \"b1b2ecc85f65c286d306733a09fb4bd8d211ee6fe948067bac509eaa702efddd\" returns successfully" Jan 17 00:03:25.255965 containerd[2019]: time="2026-01-17T00:03:25.255923060Z" level=info msg="StopPodSandbox for \"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\"" Jan 17 00:03:25.393043 containerd[2019]: 2026-01-17 00:03:25.324 [WARNING][5873] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0", GenerateName:"calico-apiserver-76869b6969-", Namespace:"calico-apiserver", SelfLink:"", UID:"98e123b4-3ef3-4dbb-b304-2875273a6844", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76869b6969", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26", Pod:"calico-apiserver-76869b6969-4smvz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58415f027bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:25.393043 containerd[2019]: 2026-01-17 00:03:25.325 [INFO][5873] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" Jan 17 00:03:25.393043 containerd[2019]: 2026-01-17 00:03:25.325 [INFO][5873] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" iface="eth0" netns="" Jan 17 00:03:25.393043 containerd[2019]: 2026-01-17 00:03:25.325 [INFO][5873] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" Jan 17 00:03:25.393043 containerd[2019]: 2026-01-17 00:03:25.325 [INFO][5873] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" Jan 17 00:03:25.393043 containerd[2019]: 2026-01-17 00:03:25.368 [INFO][5881] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" HandleID="k8s-pod-network.68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0" Jan 17 00:03:25.393043 containerd[2019]: 2026-01-17 00:03:25.369 [INFO][5881] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:25.393043 containerd[2019]: 2026-01-17 00:03:25.369 [INFO][5881] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:25.393043 containerd[2019]: 2026-01-17 00:03:25.382 [WARNING][5881] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" HandleID="k8s-pod-network.68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0" Jan 17 00:03:25.393043 containerd[2019]: 2026-01-17 00:03:25.383 [INFO][5881] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" HandleID="k8s-pod-network.68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0" Jan 17 00:03:25.393043 containerd[2019]: 2026-01-17 00:03:25.385 [INFO][5881] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:25.393043 containerd[2019]: 2026-01-17 00:03:25.389 [INFO][5873] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" Jan 17 00:03:25.393043 containerd[2019]: time="2026-01-17T00:03:25.392865500Z" level=info msg="TearDown network for sandbox \"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\" successfully" Jan 17 00:03:25.396583 containerd[2019]: time="2026-01-17T00:03:25.393830300Z" level=info msg="StopPodSandbox for \"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\" returns successfully" Jan 17 00:03:25.396583 containerd[2019]: time="2026-01-17T00:03:25.395462384Z" level=info msg="RemovePodSandbox for \"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\"" Jan 17 00:03:25.396583 containerd[2019]: time="2026-01-17T00:03:25.395534192Z" level=info msg="Forcibly stopping sandbox \"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\"" Jan 17 00:03:25.543357 containerd[2019]: 2026-01-17 00:03:25.465 [WARNING][5895] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0", GenerateName:"calico-apiserver-76869b6969-", Namespace:"calico-apiserver", SelfLink:"", UID:"98e123b4-3ef3-4dbb-b304-2875273a6844", ResourceVersion:"1082", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 41, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"76869b6969", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"7c3ab33bcef4d8f2bb217883165252ccd9350dcad3d677e7c714e0b849c9dc26", Pod:"calico-apiserver-76869b6969-4smvz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.121.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali58415f027bb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:25.543357 containerd[2019]: 2026-01-17 00:03:25.466 [INFO][5895] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" Jan 17 00:03:25.543357 containerd[2019]: 2026-01-17 00:03:25.466 [INFO][5895] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" iface="eth0" netns="" Jan 17 00:03:25.543357 containerd[2019]: 2026-01-17 00:03:25.466 [INFO][5895] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" Jan 17 00:03:25.543357 containerd[2019]: 2026-01-17 00:03:25.466 [INFO][5895] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" Jan 17 00:03:25.543357 containerd[2019]: 2026-01-17 00:03:25.516 [INFO][5902] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" HandleID="k8s-pod-network.68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0" Jan 17 00:03:25.543357 containerd[2019]: 2026-01-17 00:03:25.516 [INFO][5902] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:25.543357 containerd[2019]: 2026-01-17 00:03:25.517 [INFO][5902] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:03:25.543357 containerd[2019]: 2026-01-17 00:03:25.532 [WARNING][5902] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" HandleID="k8s-pod-network.68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0" Jan 17 00:03:25.543357 containerd[2019]: 2026-01-17 00:03:25.532 [INFO][5902] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" HandleID="k8s-pod-network.68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" Workload="ip--172--31--30--130-k8s-calico--apiserver--76869b6969--4smvz-eth0" Jan 17 00:03:25.543357 containerd[2019]: 2026-01-17 00:03:25.536 [INFO][5902] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:25.543357 containerd[2019]: 2026-01-17 00:03:25.539 [INFO][5895] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543" Jan 17 00:03:25.544676 containerd[2019]: time="2026-01-17T00:03:25.543398205Z" level=info msg="TearDown network for sandbox \"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\" successfully" Jan 17 00:03:25.550924 containerd[2019]: time="2026-01-17T00:03:25.550852137Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:03:25.551095 containerd[2019]: time="2026-01-17T00:03:25.550950213Z" level=info msg="RemovePodSandbox \"68706873fec676d65b3e448dbc552534316f623ad265bffdfafadb13989bb543\" returns successfully" Jan 17 00:03:25.552066 containerd[2019]: time="2026-01-17T00:03:25.551615253Z" level=info msg="StopPodSandbox for \"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\"" Jan 17 00:03:25.686801 containerd[2019]: 2026-01-17 00:03:25.614 [WARNING][5917] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"100157b3-6e13-496d-9d2a-b11a40a79c18", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6", Pod:"coredns-668d6bf9bc-p9b6s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b16d4e9a34", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:25.686801 containerd[2019]: 2026-01-17 00:03:25.614 [INFO][5917] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" Jan 17 00:03:25.686801 containerd[2019]: 2026-01-17 00:03:25.614 [INFO][5917] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" iface="eth0" netns="" Jan 17 00:03:25.686801 containerd[2019]: 2026-01-17 00:03:25.614 [INFO][5917] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" Jan 17 00:03:25.686801 containerd[2019]: 2026-01-17 00:03:25.614 [INFO][5917] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" Jan 17 00:03:25.686801 containerd[2019]: 2026-01-17 00:03:25.655 [INFO][5924] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" HandleID="k8s-pod-network.feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0" Jan 17 00:03:25.686801 containerd[2019]: 2026-01-17 00:03:25.656 [INFO][5924] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:25.686801 containerd[2019]: 2026-01-17 00:03:25.656 [INFO][5924] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:03:25.686801 containerd[2019]: 2026-01-17 00:03:25.676 [WARNING][5924] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" HandleID="k8s-pod-network.feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0" Jan 17 00:03:25.686801 containerd[2019]: 2026-01-17 00:03:25.676 [INFO][5924] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" HandleID="k8s-pod-network.feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0" Jan 17 00:03:25.686801 containerd[2019]: 2026-01-17 00:03:25.679 [INFO][5924] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:25.686801 containerd[2019]: 2026-01-17 00:03:25.682 [INFO][5917] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" Jan 17 00:03:25.687724 containerd[2019]: time="2026-01-17T00:03:25.687333826Z" level=info msg="TearDown network for sandbox \"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\" successfully" Jan 17 00:03:25.687724 containerd[2019]: time="2026-01-17T00:03:25.687376030Z" level=info msg="StopPodSandbox for \"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\" returns successfully" Jan 17 00:03:25.688547 containerd[2019]: time="2026-01-17T00:03:25.687966718Z" level=info msg="RemovePodSandbox for \"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\"" Jan 17 00:03:25.688547 containerd[2019]: time="2026-01-17T00:03:25.688012186Z" level=info msg="Forcibly stopping sandbox \"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\"" Jan 17 00:03:25.819417 containerd[2019]: 2026-01-17 00:03:25.754 [WARNING][5939] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"100157b3-6e13-496d-9d2a-b11a40a79c18", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"e1e454b69de734a2021995bea1734276ba5f962b8d98edef8abcba4b6890cac6", Pod:"coredns-668d6bf9bc-p9b6s", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3b16d4e9a34", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:25.819417 containerd[2019]: 2026-01-17 00:03:25.756 [INFO][5939] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" Jan 17 00:03:25.819417 containerd[2019]: 2026-01-17 00:03:25.756 [INFO][5939] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" iface="eth0" netns="" Jan 17 00:03:25.819417 containerd[2019]: 2026-01-17 00:03:25.756 [INFO][5939] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" Jan 17 00:03:25.819417 containerd[2019]: 2026-01-17 00:03:25.756 [INFO][5939] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" Jan 17 00:03:25.819417 containerd[2019]: 2026-01-17 00:03:25.796 [INFO][5946] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" HandleID="k8s-pod-network.feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0" Jan 17 00:03:25.819417 containerd[2019]: 2026-01-17 00:03:25.797 [INFO][5946] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:25.819417 containerd[2019]: 2026-01-17 00:03:25.797 [INFO][5946] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:03:25.819417 containerd[2019]: 2026-01-17 00:03:25.810 [WARNING][5946] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" HandleID="k8s-pod-network.feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0" Jan 17 00:03:25.819417 containerd[2019]: 2026-01-17 00:03:25.810 [INFO][5946] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" HandleID="k8s-pod-network.feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--p9b6s-eth0" Jan 17 00:03:25.819417 containerd[2019]: 2026-01-17 00:03:25.812 [INFO][5946] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:25.819417 containerd[2019]: 2026-01-17 00:03:25.815 [INFO][5939] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b" Jan 17 00:03:25.821309 containerd[2019]: time="2026-01-17T00:03:25.820306090Z" level=info msg="TearDown network for sandbox \"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\" successfully" Jan 17 00:03:25.826894 containerd[2019]: time="2026-01-17T00:03:25.826822030Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:03:25.827402 containerd[2019]: time="2026-01-17T00:03:25.826919458Z" level=info msg="RemovePodSandbox \"feee2dab4833dc07d755ecff13bf3fea748763f8cfcc0abbf44798541887614b\" returns successfully" Jan 17 00:03:25.828136 containerd[2019]: time="2026-01-17T00:03:25.827689294Z" level=info msg="StopPodSandbox for \"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\"" Jan 17 00:03:25.969122 containerd[2019]: 2026-01-17 00:03:25.907 [WARNING][5960] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3bc218c2-324a-4549-a1e2-ab9fb6d1d96d", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316", Pod:"coredns-668d6bf9bc-982qf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali388d096fd98", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:25.969122 containerd[2019]: 2026-01-17 00:03:25.907 [INFO][5960] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" Jan 17 00:03:25.969122 containerd[2019]: 2026-01-17 00:03:25.907 [INFO][5960] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" iface="eth0" netns="" Jan 17 00:03:25.969122 containerd[2019]: 2026-01-17 00:03:25.907 [INFO][5960] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" Jan 17 00:03:25.969122 containerd[2019]: 2026-01-17 00:03:25.907 [INFO][5960] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" Jan 17 00:03:25.969122 containerd[2019]: 2026-01-17 00:03:25.947 [INFO][5967] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" HandleID="k8s-pod-network.6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0" Jan 17 00:03:25.969122 containerd[2019]: 2026-01-17 00:03:25.947 [INFO][5967] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:25.969122 containerd[2019]: 2026-01-17 00:03:25.947 [INFO][5967] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:03:25.969122 containerd[2019]: 2026-01-17 00:03:25.959 [WARNING][5967] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" HandleID="k8s-pod-network.6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0" Jan 17 00:03:25.969122 containerd[2019]: 2026-01-17 00:03:25.959 [INFO][5967] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" HandleID="k8s-pod-network.6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0" Jan 17 00:03:25.969122 containerd[2019]: 2026-01-17 00:03:25.962 [INFO][5967] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:25.969122 containerd[2019]: 2026-01-17 00:03:25.966 [INFO][5960] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" Jan 17 00:03:25.969122 containerd[2019]: time="2026-01-17T00:03:25.969066539Z" level=info msg="TearDown network for sandbox \"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\" successfully" Jan 17 00:03:25.970924 containerd[2019]: time="2026-01-17T00:03:25.969121175Z" level=info msg="StopPodSandbox for \"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\" returns successfully" Jan 17 00:03:25.971092 containerd[2019]: time="2026-01-17T00:03:25.970996175Z" level=info msg="RemovePodSandbox for \"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\"" Jan 17 00:03:25.971260 containerd[2019]: time="2026-01-17T00:03:25.971084075Z" level=info msg="Forcibly stopping sandbox \"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\"" Jan 17 00:03:26.123143 containerd[2019]: 2026-01-17 00:03:26.038 [WARNING][5981] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"3bc218c2-324a-4549-a1e2-ab9fb6d1d96d", ResourceVersion:"1053", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 2, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-30-130", ContainerID:"d0b930fef60f3205c48320a74d58a78d229903e07dde9bfbd43cd7ad0ff30316", Pod:"coredns-668d6bf9bc-982qf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.121.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali388d096fd98", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:03:26.123143 containerd[2019]: 2026-01-17 00:03:26.041 [INFO][5981] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" Jan 17 00:03:26.123143 containerd[2019]: 2026-01-17 00:03:26.041 [INFO][5981] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" iface="eth0" netns="" Jan 17 00:03:26.123143 containerd[2019]: 2026-01-17 00:03:26.042 [INFO][5981] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" Jan 17 00:03:26.123143 containerd[2019]: 2026-01-17 00:03:26.042 [INFO][5981] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" Jan 17 00:03:26.123143 containerd[2019]: 2026-01-17 00:03:26.097 [INFO][5989] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" HandleID="k8s-pod-network.6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0" Jan 17 00:03:26.123143 containerd[2019]: 2026-01-17 00:03:26.097 [INFO][5989] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:03:26.123143 containerd[2019]: 2026-01-17 00:03:26.097 [INFO][5989] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:03:26.123143 containerd[2019]: 2026-01-17 00:03:26.111 [WARNING][5989] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" HandleID="k8s-pod-network.6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0" Jan 17 00:03:26.123143 containerd[2019]: 2026-01-17 00:03:26.111 [INFO][5989] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" HandleID="k8s-pod-network.6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" Workload="ip--172--31--30--130-k8s-coredns--668d6bf9bc--982qf-eth0" Jan 17 00:03:26.123143 containerd[2019]: 2026-01-17 00:03:26.113 [INFO][5989] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:03:26.123143 containerd[2019]: 2026-01-17 00:03:26.117 [INFO][5981] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22" Jan 17 00:03:26.124516 containerd[2019]: time="2026-01-17T00:03:26.123237032Z" level=info msg="TearDown network for sandbox \"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\" successfully" Jan 17 00:03:26.132754 containerd[2019]: time="2026-01-17T00:03:26.130827596Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:03:26.132754 containerd[2019]: time="2026-01-17T00:03:26.130920332Z" level=info msg="RemovePodSandbox \"6f48f8d78a7ec41cd4ffb3a05813415205b3d3a2f0c368e9ab8eaa1b24f42f22\" returns successfully" Jan 17 00:03:26.137219 kubelet[3407]: E0117 00:03:26.136717 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jl689" podUID="0e8ea394-25e8-46d5-8e69-e40f87a471c2"
Jan 17 00:03:27.166982 ntpd[1994]: Listen normally on 8 vxlan.calico 192.168.121.64:123
Jan 17 00:03:27.167130 ntpd[1994]: Listen normally on 9 cali87648f25ba6 [fe80::ecee:eeff:feee:eeee%4]:123
Jan 17 00:03:27.167343 ntpd[1994]: Listen normally on 10 vxlan.calico [fe80::649b:feff:fe05:4edf%5]:123
Jan 17 00:03:27.167423 ntpd[1994]: Listen normally on 11 cali3075b529f58 [fe80::ecee:eeff:feee:eeee%8]:123
Jan 17 00:03:27.167495 ntpd[1994]: Listen normally on 12 calid24b52934d7 [fe80::ecee:eeff:feee:eeee%9]:123
Jan 17 00:03:27.167565 ntpd[1994]: Listen normally on 13 cali388d096fd98 [fe80::ecee:eeff:feee:eeee%10]:123
Jan 17 00:03:27.167633 ntpd[1994]: Listen normally on 14 cali2325dc0e017 [fe80::ecee:eeff:feee:eeee%11]:123
Jan 17 00:03:27.167703 ntpd[1994]: Listen normally on 15 cali58415f027bb [fe80::ecee:eeff:feee:eeee%12]:123
Jan 17 00:03:27.167775 ntpd[1994]: Listen normally on 16 cali3b16d4e9a34 [fe80::ecee:eeff:feee:eeee%13]:123
Jan 17 00:03:27.167843 ntpd[1994]: Listen normally on 17 cali0be3f450cbe [fe80::ecee:eeff:feee:eeee%14]:123
Jan 17 00:03:30.511252 containerd[2019]: time="2026-01-17T00:03:30.511083734Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:03:30.777125 containerd[2019]: time="2026-01-17T00:03:30.776840571Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:30.779300 containerd[2019]: time="2026-01-17T00:03:30.779051823Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:03:30.779300 containerd[2019]: time="2026-01-17T00:03:30.779167851Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:03:30.780639 kubelet[3407]: E0117 00:03:30.779629 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:03:30.780639 kubelet[3407]: E0117 00:03:30.779690 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:03:30.780639 kubelet[3407]: E0117 00:03:30.779864
3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xfn9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76869b6969-b9cdk_calico-apiserver(cecd0bc0-a5de-49ac-853f-0e0f9c309bd4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:30.782016 kubelet[3407]: E0117 00:03:30.781361 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76869b6969-b9cdk" podUID="cecd0bc0-a5de-49ac-853f-0e0f9c309bd4" Jan 17 00:03:30.811778 systemd[1]: Started sshd@7-172.31.30.130:22-68.220.241.50:41666.service - OpenSSH per-connection server daemon (68.220.241.50:41666). Jan 17 00:03:31.358894 sshd[6010]: Accepted publickey for core from 68.220.241.50 port 41666 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:03:31.362536 sshd[6010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:03:31.370882 systemd-logind[2000]: New session 8 of user core. 
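All of the pull failures in this log share one cause: the ghcr.io/flatcar/calico/*:v3.30.4 references do not resolve, so the registry's manifest endpoint answers 404 and containerd reports "trying next host - response was http.StatusNotFound" before giving up with NotFound. The check can be reproduced directly against the OCI distribution API; in the sketch below, the anonymous token endpoint and its query parameters follow the standard Docker registry token scheme and should be treated as assumptions rather than something taken from this log:

```go
// Probe whether a tag resolves on ghcr.io the way containerd does: fetch an
// anonymous pull token, then HEAD the manifest. The /token endpoint is the
// usual Docker token-auth convention, assumed to apply to ghcr.io here.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	repo, tag := "flatcar/calico/apiserver", "v3.30.4"

	// 1. Anonymous bearer token for pull scope (assumed endpoint).
	resp, err := http.Get("https://ghcr.io/token?service=ghcr.io&scope=repository:" + repo + ":pull")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		panic(err)
	}

	// 2. HEAD the manifest; a 404 here is what containerd surfaces as
	// "failed to resolve reference ...: not found".
	req, err := http.NewRequest(http.MethodHead, "https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	res.Body.Close()
	fmt.Println(repo+":"+tag, "->", res.Status) // e.g. "404 Not Found"
}
```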
Jan 17 00:03:31.380499 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 00:03:31.894623 sshd[6010]: pam_unix(sshd:session): session closed for user core Jan 17 00:03:31.902673 systemd[1]: sshd@7-172.31.30.130:22-68.220.241.50:41666.service: Deactivated successfully. Jan 17 00:03:31.907441 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 00:03:31.910525 systemd-logind[2000]: Session 8 logged out. Waiting for processes to exit. Jan 17 00:03:31.914587 systemd-logind[2000]: Removed session 8. Jan 17 00:03:32.512123 containerd[2019]: time="2026-01-17T00:03:32.511274080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:03:32.802298 containerd[2019]: time="2026-01-17T00:03:32.801251897Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:32.803671 containerd[2019]: time="2026-01-17T00:03:32.803583485Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:03:32.803881 containerd[2019]: time="2026-01-17T00:03:32.803734565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:03:32.804040 kubelet[3407]: E0117 00:03:32.803915 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:03:32.804040 kubelet[3407]: E0117 00:03:32.803973 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:03:32.806848 kubelet[3407]: E0117 00:03:32.804156 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mmds6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-67f478bb65-pq6fw_calico-system(b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:32.806848 kubelet[3407]: E0117 00:03:32.806401 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67f478bb65-pq6fw" podUID="b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31" Jan 17 00:03:33.512847 containerd[2019]: time="2026-01-17T00:03:33.512635577Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:03:33.773548 containerd[2019]: time="2026-01-17T00:03:33.773374950Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:33.775608 containerd[2019]: time="2026-01-17T00:03:33.775534734Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:03:33.775862 containerd[2019]: time="2026-01-17T00:03:33.775681590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:03:33.775923 kubelet[3407]: E0117 00:03:33.775871 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:03:33.776085 kubelet[3407]: E0117 00:03:33.776014 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:03:33.776836 kubelet[3407]: E0117 00:03:33.776366 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6a0029da1bf5401b94096282928a075e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kwj5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6886fb9d84-zxzfv_calico-system(82ea16ac-8d68-4d2e-9ce1-f2b920201dc6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 
00:03:33.779446 containerd[2019]: time="2026-01-17T00:03:33.779363754Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:03:34.064715 containerd[2019]: time="2026-01-17T00:03:34.064584423Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:34.066684 containerd[2019]: time="2026-01-17T00:03:34.066622995Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:03:34.066800 containerd[2019]: time="2026-01-17T00:03:34.066768471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:03:34.067106 kubelet[3407]: E0117 00:03:34.067050 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:03:34.067678 kubelet[3407]: E0117 00:03:34.067119 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:03:34.067678 kubelet[3407]: E0117 00:03:34.067292 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kwj5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6886fb9d84-zxzfv_calico-system(82ea16ac-8d68-4d2e-9ce1-f2b920201dc6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:34.068854 kubelet[3407]: E0117 00:03:34.068778 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6886fb9d84-zxzfv" podUID="82ea16ac-8d68-4d2e-9ce1-f2b920201dc6" Jan 17 00:03:34.512381 containerd[2019]: time="2026-01-17T00:03:34.511761282Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:03:34.774089 containerd[2019]: time="2026-01-17T00:03:34.773761327Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:34.776106 containerd[2019]: time="2026-01-17T00:03:34.775955071Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:03:34.776106 containerd[2019]: time="2026-01-17T00:03:34.776037067Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:03:34.776676 kubelet[3407]: E0117 00:03:34.776600 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:03:34.776789 kubelet[3407]: E0117 00:03:34.776676 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:03:34.777251 kubelet[3407]: E0117 00:03:34.776862 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t94sc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dfrr7_calico-system(2760346a-cdd2-4959-9cca-5bf87123f24a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:34.778385 kubelet[3407]: E0117 00:03:34.778290 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dfrr7" podUID="2760346a-cdd2-4959-9cca-5bf87123f24a" Jan 17 00:03:36.511537 containerd[2019]: time="2026-01-17T00:03:36.511467283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:03:36.826867 containerd[2019]: time="2026-01-17T00:03:36.826744641Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:36.829099 containerd[2019]: time="2026-01-17T00:03:36.829011861Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:03:36.829237 containerd[2019]: time="2026-01-17T00:03:36.829168497Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:03:36.829868 kubelet[3407]: E0117 00:03:36.829533 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:03:36.829868 kubelet[3407]: E0117 00:03:36.829597 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:03:36.829868 kubelet[3407]: E0117 00:03:36.829774 3407 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lx5xv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76869b6969-4smvz_calico-apiserver(98e123b4-3ef3-4dbb-b304-2875273a6844): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:36.832258 kubelet[3407]: E0117 00:03:36.831791 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76869b6969-4smvz" podUID="98e123b4-3ef3-4dbb-b304-2875273a6844" Jan 17 00:03:36.988883 systemd[1]: Started sshd@8-172.31.30.130:22-68.220.241.50:54682.service - OpenSSH per-connection server daemon (68.220.241.50:54682). Jan 17 00:03:37.496259 sshd[6025]: Accepted publickey for core from 68.220.241.50 port 54682 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:03:37.500768 sshd[6025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:03:37.517976 systemd-logind[2000]: New session 9 of user core. Jan 17 00:03:37.527609 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 17 00:03:37.987622 sshd[6025]: pam_unix(sshd:session): session closed for user core Jan 17 00:03:37.993946 systemd[1]: sshd@8-172.31.30.130:22-68.220.241.50:54682.service: Deactivated successfully. Jan 17 00:03:37.999838 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:03:38.001808 systemd-logind[2000]: Session 9 logged out. Waiting for processes to exit. Jan 17 00:03:38.004573 systemd-logind[2000]: Removed session 9. Jan 17 00:03:39.514086 containerd[2019]: time="2026-01-17T00:03:39.513981034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:03:39.799638 containerd[2019]: time="2026-01-17T00:03:39.799182312Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:39.801800 containerd[2019]: time="2026-01-17T00:03:39.801652632Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:03:39.801800 containerd[2019]: time="2026-01-17T00:03:39.801755520Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:03:39.802098 kubelet[3407]: E0117 00:03:39.801948 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:03:39.802098 kubelet[3407]: E0117 00:03:39.802016 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:03:39.802787 kubelet[3407]: E0117 00:03:39.802179 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vmhd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jl689_calico-system(0e8ea394-25e8-46d5-8e69-e40f87a471c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:39.805800 containerd[2019]: time="2026-01-17T00:03:39.805746564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:03:40.234142 containerd[2019]: time="2026-01-17T00:03:40.233933242Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:40.236193 containerd[2019]: time="2026-01-17T00:03:40.236055526Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:03:40.236193 containerd[2019]: time="2026-01-17T00:03:40.236139022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:03:40.236425 kubelet[3407]: E0117 00:03:40.236355 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:03:40.236505 kubelet[3407]: E0117 00:03:40.236418 3407 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:03:40.236654 kubelet[3407]: E0117 00:03:40.236570 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vmhd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jl689_calico-system(0e8ea394-25e8-46d5-8e69-e40f87a471c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:40.238512 kubelet[3407]: E0117 00:03:40.238419 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-jl689" podUID="0e8ea394-25e8-46d5-8e69-e40f87a471c2" Jan 17 00:03:42.512638 kubelet[3407]: E0117 00:03:42.512565 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76869b6969-b9cdk" podUID="cecd0bc0-a5de-49ac-853f-0e0f9c309bd4" Jan 17 00:03:43.099744 systemd[1]: Started sshd@9-172.31.30.130:22-68.220.241.50:47650.service - OpenSSH per-connection server daemon (68.220.241.50:47650). Jan 17 00:03:43.516153 kubelet[3407]: E0117 00:03:43.515517 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67f478bb65-pq6fw" podUID="b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31" Jan 17 00:03:43.655856 sshd[6046]: Accepted publickey for core from 68.220.241.50 port 47650 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:03:43.657694 sshd[6046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:03:43.670696 systemd-logind[2000]: New session 10 of user core. Jan 17 00:03:43.680294 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:03:44.162703 sshd[6046]: pam_unix(sshd:session): session closed for user core Jan 17 00:03:44.169657 systemd[1]: sshd@9-172.31.30.130:22-68.220.241.50:47650.service: Deactivated successfully. Jan 17 00:03:44.175632 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:03:44.178508 systemd-logind[2000]: Session 10 logged out. Waiting for processes to exit. Jan 17 00:03:44.180259 systemd-logind[2000]: Removed session 10. Jan 17 00:03:44.270673 systemd[1]: Started sshd@10-172.31.30.130:22-68.220.241.50:47664.service - OpenSSH per-connection server daemon (68.220.241.50:47664). Jan 17 00:03:44.829519 sshd[6061]: Accepted publickey for core from 68.220.241.50 port 47664 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:03:44.832112 sshd[6061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:03:44.842582 systemd-logind[2000]: New session 11 of user core. Jan 17 00:03:44.850479 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:03:45.474447 sshd[6061]: pam_unix(sshd:session): session closed for user core Jan 17 00:03:45.481485 systemd[1]: sshd@10-172.31.30.130:22-68.220.241.50:47664.service: Deactivated successfully. Jan 17 00:03:45.487908 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:03:45.490082 systemd-logind[2000]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:03:45.492337 systemd-logind[2000]: Removed session 11. 
Jan 17 00:03:45.570843 systemd[1]: Started sshd@11-172.31.30.130:22-68.220.241.50:47678.service - OpenSSH per-connection server daemon (68.220.241.50:47678). Jan 17 00:03:46.091002 sshd[6093]: Accepted publickey for core from 68.220.241.50 port 47678 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:03:46.092835 sshd[6093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:03:46.107403 systemd-logind[2000]: New session 12 of user core. Jan 17 00:03:46.114533 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:03:46.514044 kubelet[3407]: E0117 00:03:46.513858 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6886fb9d84-zxzfv" podUID="82ea16ac-8d68-4d2e-9ce1-f2b920201dc6" Jan 17 00:03:46.633014 sshd[6093]: pam_unix(sshd:session): session closed for user core Jan 17 00:03:46.638602 systemd-logind[2000]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:03:46.639972 systemd[1]: sshd@11-172.31.30.130:22-68.220.241.50:47678.service: Deactivated successfully. Jan 17 00:03:46.645040 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:03:46.650110 systemd-logind[2000]: Removed session 12. Jan 17 00:03:47.516153 kubelet[3407]: E0117 00:03:47.513373 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dfrr7" podUID="2760346a-cdd2-4959-9cca-5bf87123f24a" Jan 17 00:03:51.514422 kubelet[3407]: E0117 00:03:51.514303 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76869b6969-4smvz" podUID="98e123b4-3ef3-4dbb-b304-2875273a6844" Jan 17 00:03:51.745771 systemd[1]: Started sshd@12-172.31.30.130:22-68.220.241.50:47686.service - OpenSSH per-connection server daemon (68.220.241.50:47686). 
Jan 17 00:03:52.296870 sshd[6116]: Accepted publickey for core from 68.220.241.50 port 47686 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:03:52.299763 sshd[6116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:03:52.307667 systemd-logind[2000]: New session 13 of user core. Jan 17 00:03:52.315536 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 00:03:52.833873 sshd[6116]: pam_unix(sshd:session): session closed for user core Jan 17 00:03:52.839838 systemd[1]: sshd@12-172.31.30.130:22-68.220.241.50:47686.service: Deactivated successfully. Jan 17 00:03:52.845274 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:03:52.849472 systemd-logind[2000]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:03:52.851816 systemd-logind[2000]: Removed session 13. Jan 17 00:03:55.514634 containerd[2019]: time="2026-01-17T00:03:55.514359638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:03:55.518714 kubelet[3407]: E0117 00:03:55.516529 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jl689" podUID="0e8ea394-25e8-46d5-8e69-e40f87a471c2" Jan 17 00:03:55.773870 containerd[2019]: time="2026-01-17T00:03:55.773685123Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:55.776008 containerd[2019]: time="2026-01-17T00:03:55.775931679Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:03:55.776171 containerd[2019]: time="2026-01-17T00:03:55.776079903Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:03:55.776524 kubelet[3407]: E0117 00:03:55.776442 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:03:55.776663 kubelet[3407]: E0117 00:03:55.776526 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:03:55.776841 kubelet[3407]: E0117 00:03:55.776741 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mmds6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-67f478bb65-pq6fw_calico-system(b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:55.778181 kubelet[3407]: E0117 00:03:55.778104 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67f478bb65-pq6fw" podUID="b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31" Jan 17 00:03:56.512551 containerd[2019]: time="2026-01-17T00:03:56.512446035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:03:56.797194 containerd[2019]: time="2026-01-17T00:03:56.796991680Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:56.799639 containerd[2019]: time="2026-01-17T00:03:56.799474132Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:03:56.799639 containerd[2019]: time="2026-01-17T00:03:56.799586908Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:03:56.799891 kubelet[3407]: E0117 00:03:56.799800 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:03:56.799891 kubelet[3407]: E0117 00:03:56.799862 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:03:56.800734 kubelet[3407]: E0117 00:03:56.800176 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xfn9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76869b6969-b9cdk_calico-apiserver(cecd0bc0-a5de-49ac-853f-0e0f9c309bd4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:56.802496 kubelet[3407]: E0117 00:03:56.802417 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76869b6969-b9cdk" podUID="cecd0bc0-a5de-49ac-853f-0e0f9c309bd4" Jan 17 00:03:57.936767 systemd[1]: Started sshd@13-172.31.30.130:22-68.220.241.50:57554.service - OpenSSH per-connection server daemon (68.220.241.50:57554). Jan 17 00:03:58.496839 sshd[6129]: Accepted publickey for core from 68.220.241.50 port 57554 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:03:58.502065 sshd[6129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:03:58.517847 containerd[2019]: time="2026-01-17T00:03:58.516306953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:03:58.530309 systemd-logind[2000]: New session 14 of user core. Jan 17 00:03:58.540551 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 17 00:03:58.813778 containerd[2019]: time="2026-01-17T00:03:58.813702090Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:03:58.819512 containerd[2019]: time="2026-01-17T00:03:58.819417318Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:03:58.819686 containerd[2019]: time="2026-01-17T00:03:58.819590682Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:03:58.820048 kubelet[3407]: E0117 00:03:58.819932 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:03:58.820706 kubelet[3407]: E0117 00:03:58.820063 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:03:58.822137 kubelet[3407]: E0117 00:03:58.820989 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t94sc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dfrr7_calico-system(2760346a-cdd2-4959-9cca-5bf87123f24a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:03:58.823939 kubelet[3407]: E0117 00:03:58.822573 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dfrr7" podUID="2760346a-cdd2-4959-9cca-5bf87123f24a" Jan 17 00:03:59.055975 sshd[6129]: pam_unix(sshd:session): session closed for user core Jan 17 00:03:59.067175 systemd[1]: sshd@13-172.31.30.130:22-68.220.241.50:57554.service: Deactivated successfully. Jan 17 00:03:59.074888 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:03:59.077948 systemd-logind[2000]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:03:59.080687 systemd-logind[2000]: Removed session 14. 
Jan 17 00:04:00.512964 containerd[2019]: time="2026-01-17T00:04:00.512509483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:04:00.801721 containerd[2019]: time="2026-01-17T00:04:00.801418544Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:00.803807 containerd[2019]: time="2026-01-17T00:04:00.803690792Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:04:00.804027 containerd[2019]: time="2026-01-17T00:04:00.803775056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:04:00.804346 kubelet[3407]: E0117 00:04:00.804272 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:04:00.804981 kubelet[3407]: E0117 00:04:00.804343 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:04:00.804981 kubelet[3407]: E0117 00:04:00.804506 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6a0029da1bf5401b94096282928a075e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kwj5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6886fb9d84-zxzfv_calico-system(82ea16ac-8d68-4d2e-9ce1-f2b920201dc6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:00.808994 containerd[2019]: time="2026-01-17T00:04:00.808894040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:04:01.060950 containerd[2019]: time="2026-01-17T00:04:01.060867173Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:01.063254 containerd[2019]: time="2026-01-17T00:04:01.063135737Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:04:01.063584 kubelet[3407]: E0117 00:04:01.063470 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:04:01.063584 kubelet[3407]: E0117 00:04:01.063543 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:04:01.063929 containerd[2019]: time="2026-01-17T00:04:01.063484313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:04:01.066833 kubelet[3407]: E0117 00:04:01.063711 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kwj5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6886fb9d84-zxzfv_calico-system(82ea16ac-8d68-4d2e-9ce1-f2b920201dc6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:01.077420 kubelet[3407]: E0117 00:04:01.077353 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6886fb9d84-zxzfv" podUID="82ea16ac-8d68-4d2e-9ce1-f2b920201dc6" Jan 17 00:04:04.163964 systemd[1]: Started sshd@14-172.31.30.130:22-68.220.241.50:37728.service - OpenSSH per-connection server daemon (68.220.241.50:37728). 
Jan 17 00:04:04.512723 containerd[2019]: time="2026-01-17T00:04:04.512418131Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:04:04.711612 sshd[6152]: Accepted publickey for core from 68.220.241.50 port 37728 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:04:04.714134 sshd[6152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:04.725342 systemd-logind[2000]: New session 15 of user core. Jan 17 00:04:04.732504 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:04:04.801657 containerd[2019]: time="2026-01-17T00:04:04.801469368Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:04.803827 containerd[2019]: time="2026-01-17T00:04:04.803714976Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:04:04.803980 containerd[2019]: time="2026-01-17T00:04:04.803888724Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:04:04.804223 kubelet[3407]: E0117 00:04:04.804142 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:04:04.804997 kubelet[3407]: E0117 00:04:04.804244 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:04:04.804997 kubelet[3407]: E0117 00:04:04.804439 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lx5xv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76869b6969-4smvz_calico-apiserver(98e123b4-3ef3-4dbb-b304-2875273a6844): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:04.805967 kubelet[3407]: E0117 00:04:04.805761 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76869b6969-4smvz" podUID="98e123b4-3ef3-4dbb-b304-2875273a6844" Jan 17 00:04:05.232807 sshd[6152]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:05.238436 systemd-logind[2000]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:04:05.240126 systemd[1]: sshd@14-172.31.30.130:22-68.220.241.50:37728.service: Deactivated successfully. Jan 17 00:04:05.244622 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:04:05.251956 systemd-logind[2000]: Removed session 15. 
Jan 17 00:04:09.531118 kubelet[3407]: E0117 00:04:09.530515 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dfrr7" podUID="2760346a-cdd2-4959-9cca-5bf87123f24a" Jan 17 00:04:09.535951 kubelet[3407]: E0117 00:04:09.532745 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67f478bb65-pq6fw" podUID="b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31" Jan 17 00:04:10.336957 systemd[1]: Started sshd@15-172.31.30.130:22-68.220.241.50:37744.service - OpenSSH per-connection server daemon (68.220.241.50:37744). Jan 17 00:04:10.515305 containerd[2019]: time="2026-01-17T00:04:10.514038940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:04:10.800281 containerd[2019]: time="2026-01-17T00:04:10.799833222Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:10.802085 containerd[2019]: time="2026-01-17T00:04:10.801929826Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:04:10.802085 containerd[2019]: time="2026-01-17T00:04:10.802042710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:04:10.802352 kubelet[3407]: E0117 00:04:10.802268 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:04:10.802352 kubelet[3407]: E0117 00:04:10.802333 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:04:10.803013 kubelet[3407]: E0117 00:04:10.802501 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vmhd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jl689_calico-system(0e8ea394-25e8-46d5-8e69-e40f87a471c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:10.810846 containerd[2019]: time="2026-01-17T00:04:10.809292882Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:04:10.906283 sshd[6165]: Accepted publickey for core from 68.220.241.50 port 37744 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:04:10.909331 sshd[6165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:10.919680 systemd-logind[2000]: New session 16 of user core. Jan 17 00:04:10.926561 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 17 00:04:11.102027 containerd[2019]: time="2026-01-17T00:04:11.101798931Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:11.104264 containerd[2019]: time="2026-01-17T00:04:11.104036331Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:04:11.104264 containerd[2019]: time="2026-01-17T00:04:11.104123343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:04:11.105421 kubelet[3407]: E0117 00:04:11.104659 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:04:11.105421 kubelet[3407]: E0117 00:04:11.104738 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:04:11.105421 kubelet[3407]: E0117 00:04:11.104965 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vmhd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jl689_calico-system(0e8ea394-25e8-46d5-8e69-e40f87a471c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:11.106357 kubelet[3407]: E0117 00:04:11.106268 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jl689" podUID="0e8ea394-25e8-46d5-8e69-e40f87a471c2" Jan 17 00:04:11.416639 sshd[6165]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:11.426313 systemd[1]: sshd@15-172.31.30.130:22-68.220.241.50:37744.service: Deactivated successfully. Jan 17 00:04:11.431495 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:04:11.436570 systemd-logind[2000]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:04:11.439643 systemd-logind[2000]: Removed session 16. 
Jan 17 00:04:11.510851 systemd[1]: Started sshd@16-172.31.30.130:22-68.220.241.50:37748.service - OpenSSH per-connection server daemon (68.220.241.50:37748). Jan 17 00:04:11.518959 kubelet[3407]: E0117 00:04:11.518871 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76869b6969-b9cdk" podUID="cecd0bc0-a5de-49ac-853f-0e0f9c309bd4" Jan 17 00:04:12.053886 sshd[6178]: Accepted publickey for core from 68.220.241.50 port 37748 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:04:12.056934 sshd[6178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:12.067341 systemd-logind[2000]: New session 17 of user core. Jan 17 00:04:12.074925 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:04:12.833863 sshd[6178]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:12.841751 systemd[1]: sshd@16-172.31.30.130:22-68.220.241.50:37748.service: Deactivated successfully. Jan 17 00:04:12.845812 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:04:12.848857 systemd-logind[2000]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:04:12.851869 systemd-logind[2000]: Removed session 17. Jan 17 00:04:12.932693 systemd[1]: Started sshd@17-172.31.30.130:22-68.220.241.50:58292.service - OpenSSH per-connection server daemon (68.220.241.50:58292). Jan 17 00:04:13.457711 sshd[6189]: Accepted publickey for core from 68.220.241.50 port 58292 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:04:13.461150 sshd[6189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:13.470872 systemd-logind[2000]: New session 18 of user core. Jan 17 00:04:13.482546 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 00:04:13.518433 kubelet[3407]: E0117 00:04:13.518286 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6886fb9d84-zxzfv" podUID="82ea16ac-8d68-4d2e-9ce1-f2b920201dc6" Jan 17 00:04:14.804596 sshd[6189]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:14.814951 systemd[1]: sshd@17-172.31.30.130:22-68.220.241.50:58292.service: Deactivated successfully. 
Jan 17 00:04:14.820042 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:04:14.825174 systemd-logind[2000]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:04:14.828067 systemd-logind[2000]: Removed session 18. Jan 17 00:04:14.917783 systemd[1]: Started sshd@18-172.31.30.130:22-68.220.241.50:58304.service - OpenSSH per-connection server daemon (68.220.241.50:58304). Jan 17 00:04:15.471441 sshd[6207]: Accepted publickey for core from 68.220.241.50 port 58304 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:04:15.475529 sshd[6207]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:15.484073 systemd-logind[2000]: New session 19 of user core. Jan 17 00:04:15.489521 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:04:16.254532 sshd[6207]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:16.259946 systemd[1]: sshd@18-172.31.30.130:22-68.220.241.50:58304.service: Deactivated successfully. Jan 17 00:04:16.264102 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:04:16.269487 systemd-logind[2000]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:04:16.273179 systemd-logind[2000]: Removed session 19. Jan 17 00:04:16.358748 systemd[1]: Started sshd@19-172.31.30.130:22-68.220.241.50:58310.service - OpenSSH per-connection server daemon (68.220.241.50:58310). Jan 17 00:04:16.511083 kubelet[3407]: E0117 00:04:16.510900 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76869b6969-4smvz" podUID="98e123b4-3ef3-4dbb-b304-2875273a6844" Jan 17 00:04:16.916132 sshd[6238]: Accepted publickey for core from 68.220.241.50 port 58310 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:04:16.920525 sshd[6238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:16.930636 systemd-logind[2000]: New session 20 of user core. Jan 17 00:04:16.937529 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 00:04:17.514020 sshd[6238]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:17.527970 systemd[1]: sshd@19-172.31.30.130:22-68.220.241.50:58310.service: Deactivated successfully. Jan 17 00:04:17.535590 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:04:17.539002 systemd-logind[2000]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:04:17.544431 systemd-logind[2000]: Removed session 20. 
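Interleaved with the pull failures, sshd and systemd-logind record complete session lifecycles (Accepted publickey, session opened, session closed, scope deactivated, Removed session N). An illustrative way to pair those records and measure how long each session lasted from a dump of this journal; the node.log filename and the regexes are assumptions tied to the line format shown here:

import re
from datetime import datetime

OPENED = re.compile(r"(\w+ \d+ [\d:.]+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session opened")
CLOSED = re.compile(r"(\w+ \d+ [\d:.]+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session closed")

def ts(stamp):  # e.g. "Jan 17 00:04:12.056934" (no year in this journal format)
    return datetime.strptime(stamp, "%b %d %H:%M:%S.%f")

open_at = {}
with open("node.log") as fh:  # hypothetical dump of this journal
    for line in fh:
        if m := OPENED.search(line):
            open_at[m.group(2)] = ts(m.group(1))
        elif (m := CLOSED.search(line)) and m.group(2) in open_at:
            started = open_at.pop(m.group(2))
            print(f"sshd[{m.group(2)}] session lasted {ts(m.group(1)) - started}")

On the excerpt above this yields sub-second to multi-second sessions (e.g. sshd[6178]: opened 00:04:12.056934, closed 00:04:12.833863), the cadence of an automated client rather than an interactive login.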
Jan 17 00:04:20.512189 kubelet[3407]: E0117 00:04:20.512081 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67f478bb65-pq6fw" podUID="b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31" Jan 17 00:04:22.516048 kubelet[3407]: E0117 00:04:22.515925 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jl689" podUID="0e8ea394-25e8-46d5-8e69-e40f87a471c2" Jan 17 00:04:22.610871 systemd[1]: Started sshd@20-172.31.30.130:22-68.220.241.50:59034.service - OpenSSH per-connection server daemon (68.220.241.50:59034). Jan 17 00:04:23.133303 sshd[6254]: Accepted publickey for core from 68.220.241.50 port 59034 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:04:23.136368 sshd[6254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:23.144551 systemd-logind[2000]: New session 21 of user core. Jan 17 00:04:23.156528 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 00:04:23.646685 sshd[6254]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:23.654086 systemd[1]: sshd@20-172.31.30.130:22-68.220.241.50:59034.service: Deactivated successfully. Jan 17 00:04:23.660519 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:04:23.667171 systemd-logind[2000]: Session 21 logged out. Waiting for processes to exit. Jan 17 00:04:23.676156 systemd-logind[2000]: Removed session 21. 
Jan 17 00:04:24.512382 kubelet[3407]: E0117 00:04:24.512178 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dfrr7" podUID="2760346a-cdd2-4959-9cca-5bf87123f24a" Jan 17 00:04:25.514697 kubelet[3407]: E0117 00:04:25.513690 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76869b6969-b9cdk" podUID="cecd0bc0-a5de-49ac-853f-0e0f9c309bd4" Jan 17 00:04:25.519567 kubelet[3407]: E0117 00:04:25.518476 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6886fb9d84-zxzfv" podUID="82ea16ac-8d68-4d2e-9ce1-f2b920201dc6" Jan 17 00:04:28.745978 systemd[1]: Started sshd@21-172.31.30.130:22-68.220.241.50:59036.service - OpenSSH per-connection server daemon (68.220.241.50:59036). Jan 17 00:04:29.260407 sshd[6269]: Accepted publickey for core from 68.220.241.50 port 59036 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:04:29.263920 sshd[6269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:29.275339 systemd-logind[2000]: New session 22 of user core. Jan 17 00:04:29.283525 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 00:04:29.786786 sshd[6269]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:29.798901 systemd[1]: sshd@21-172.31.30.130:22-68.220.241.50:59036.service: Deactivated successfully. Jan 17 00:04:29.804938 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:04:29.809933 systemd-logind[2000]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:04:29.812534 systemd-logind[2000]: Removed session 22. 
Jan 17 00:04:31.521778 kubelet[3407]: E0117 00:04:31.521695 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76869b6969-4smvz" podUID="98e123b4-3ef3-4dbb-b304-2875273a6844" Jan 17 00:04:33.518051 kubelet[3407]: E0117 00:04:33.517547 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67f478bb65-pq6fw" podUID="b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31" Jan 17 00:04:34.892108 systemd[1]: Started sshd@22-172.31.30.130:22-68.220.241.50:59532.service - OpenSSH per-connection server daemon (68.220.241.50:59532). Jan 17 00:04:35.403283 sshd[6284]: Accepted publickey for core from 68.220.241.50 port 59532 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:04:35.405740 sshd[6284]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:35.418584 systemd-logind[2000]: New session 23 of user core. Jan 17 00:04:35.424552 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 00:04:35.939543 sshd[6284]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:35.949124 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 00:04:35.953754 systemd[1]: sshd@22-172.31.30.130:22-68.220.241.50:59532.service: Deactivated successfully. Jan 17 00:04:35.961816 systemd-logind[2000]: Session 23 logged out. Waiting for processes to exit. Jan 17 00:04:35.967781 systemd-logind[2000]: Removed session 23. 
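From this point the kubelet mostly logs ImagePullBackOff rather than fresh ErrImagePull: it caches the pull failure and raises the delay between attempts (exponential back-off, capped at 300 seconds). A sketch that extracts the per-pod cadence of the "Error syncing pod" records from a journal dump, to watch the gaps widen toward the cap; filename and format assumptions as before:

import re
from collections import defaultdict
from datetime import datetime

RECORD = re.compile(r'^(\w+ \d+ [\d:.]+) .*"Error syncing pod.*podUID="([0-9a-f-]+)"')

stamps = defaultdict(list)
with open("node.log") as fh:  # hypothetical dump of this journal
    for line in fh:
        if m := RECORD.search(line):
            stamps[m.group(2)].append(datetime.strptime(m.group(1), "%b %d %H:%M:%S.%f"))

for uid, times in stamps.items():
    gaps = [round((b - a).total_seconds()) for a, b in zip(times, times[1:])]
    print(uid[:8], "gaps (s):", gaps)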
Jan 17 00:04:36.512924 kubelet[3407]: E0117 00:04:36.512467 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dfrr7" podUID="2760346a-cdd2-4959-9cca-5bf87123f24a" Jan 17 00:04:37.516917 kubelet[3407]: E0117 00:04:37.514787 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6886fb9d84-zxzfv" podUID="82ea16ac-8d68-4d2e-9ce1-f2b920201dc6" Jan 17 00:04:37.520079 kubelet[3407]: E0117 00:04:37.519572 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jl689" podUID="0e8ea394-25e8-46d5-8e69-e40f87a471c2" Jan 17 00:04:40.511558 containerd[2019]: time="2026-01-17T00:04:40.511471965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:04:40.774791 containerd[2019]: time="2026-01-17T00:04:40.774600503Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:40.777135 containerd[2019]: time="2026-01-17T00:04:40.777026891Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:04:40.777576 containerd[2019]: time="2026-01-17T00:04:40.777524807Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:04:40.777703 kubelet[3407]: E0117 00:04:40.777646 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:04:40.778905 kubelet[3407]: E0117 00:04:40.777724 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:04:40.778905 kubelet[3407]: E0117 00:04:40.777922 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xfn9w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76869b6969-b9cdk_calico-apiserver(cecd0bc0-a5de-49ac-853f-0e0f9c309bd4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:40.779287 kubelet[3407]: E0117 00:04:40.779222 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76869b6969-b9cdk" podUID="cecd0bc0-a5de-49ac-853f-0e0f9c309bd4" Jan 17 00:04:41.034812 systemd[1]: Started sshd@23-172.31.30.130:22-68.220.241.50:59542.service - OpenSSH per-connection server daemon (68.220.241.50:59542). Jan 17 00:04:41.550892 sshd[6303]: Accepted publickey for core from 68.220.241.50 port 59542 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:04:41.554046 sshd[6303]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:41.567905 systemd-logind[2000]: New session 24 of user core. Jan 17 00:04:41.574498 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 00:04:42.078588 sshd[6303]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:42.091083 systemd[1]: sshd@23-172.31.30.130:22-68.220.241.50:59542.service: Deactivated successfully. Jan 17 00:04:42.092028 systemd-logind[2000]: Session 24 logged out. Waiting for processes to exit. Jan 17 00:04:42.103830 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 00:04:42.110952 systemd-logind[2000]: Removed session 24. Jan 17 00:04:45.513886 containerd[2019]: time="2026-01-17T00:04:45.512974370Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:04:45.774704 containerd[2019]: time="2026-01-17T00:04:45.774355251Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:45.777038 containerd[2019]: time="2026-01-17T00:04:45.776864752Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:04:45.777038 containerd[2019]: time="2026-01-17T00:04:45.776949580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:04:45.777415 kubelet[3407]: E0117 00:04:45.777301 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:04:45.777415 kubelet[3407]: E0117 00:04:45.777374 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:04:45.778770 kubelet[3407]: E0117 00:04:45.777555 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lx5xv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-76869b6969-4smvz_calico-apiserver(98e123b4-3ef3-4dbb-b304-2875273a6844): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:45.779384 kubelet[3407]: E0117 00:04:45.779317 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76869b6969-4smvz" podUID="98e123b4-3ef3-4dbb-b304-2875273a6844" Jan 17 00:04:47.193423 systemd[1]: Started sshd@24-172.31.30.130:22-68.220.241.50:54388.service - OpenSSH per-connection server daemon (68.220.241.50:54388). 
Jan 17 00:04:47.515385 containerd[2019]: time="2026-01-17T00:04:47.514592656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:04:47.763789 sshd[6341]: Accepted publickey for core from 68.220.241.50 port 54388 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:04:47.764355 containerd[2019]: time="2026-01-17T00:04:47.763915169Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:47.769133 containerd[2019]: time="2026-01-17T00:04:47.767518229Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:04:47.769133 containerd[2019]: time="2026-01-17T00:04:47.767564393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:04:47.769420 kubelet[3407]: E0117 00:04:47.768304 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:04:47.769420 kubelet[3407]: E0117 00:04:47.768368 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:04:47.769420 kubelet[3407]: E0117 00:04:47.768552 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mmds6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-67f478bb65-pq6fw_calico-system(b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:47.770616 kubelet[3407]: E0117 00:04:47.769845 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67f478bb65-pq6fw" podUID="b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31" Jan 17 00:04:47.770487 sshd[6341]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:04:47.783154 systemd-logind[2000]: New session 25 of user core. Jan 17 00:04:47.794955 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 00:04:48.323581 sshd[6341]: pam_unix(sshd:session): session closed for user core Jan 17 00:04:48.332635 systemd[1]: sshd@24-172.31.30.130:22-68.220.241.50:54388.service: Deactivated successfully. Jan 17 00:04:48.339721 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 00:04:48.342357 systemd-logind[2000]: Session 25 logged out. Waiting for processes to exit. Jan 17 00:04:48.346308 systemd-logind[2000]: Removed session 25. 
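Every reference that fails in this journal carries the same v3.30.4 tag across every failing repository (apiserver, csi, goldmane, kube-controllers, node-driver-registrar, whisker, whisker-backend), which points at the tag being absent for the whole image family rather than a typo in one manifest. A quick, illustrative way to confirm that from a dump of the journal (same filename assumption as above; the pattern matches the references as they appear here, including backslash-escaped quoting):

import re

IMAGE_REF = re.compile(r"ghcr\.io/[A-Za-z0-9._/-]+:v[0-9][A-Za-z0-9._-]*")

refs = set()
with open("node.log") as fh:  # hypothetical dump of this journal
    for line in fh:
        refs.update(IMAGE_REF.findall(line))
for ref in sorted(refs):
    print(ref)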
Jan 17 00:04:48.511841 containerd[2019]: time="2026-01-17T00:04:48.511404053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 17 00:04:48.775014 containerd[2019]: time="2026-01-17T00:04:48.774461682Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:48.776822 containerd[2019]: time="2026-01-17T00:04:48.776673018Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 17 00:04:48.777002 containerd[2019]: time="2026-01-17T00:04:48.776751918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 17 00:04:48.778000 kubelet[3407]: E0117 00:04:48.777264 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:04:48.778000 kubelet[3407]: E0117 00:04:48.777327 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 17 00:04:48.778000 kubelet[3407]: E0117 00:04:48.777476 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:6a0029da1bf5401b94096282928a075e,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kwj5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6886fb9d84-zxzfv_calico-system(82ea16ac-8d68-4d2e-9ce1-f2b920201dc6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:48.782195 containerd[2019]: time="2026-01-17T00:04:48.782136594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 17 00:04:49.069494 containerd[2019]: time="2026-01-17T00:04:49.069417868Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:49.071809 containerd[2019]: time="2026-01-17T00:04:49.071698540Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 17 00:04:49.071974 containerd[2019]: time="2026-01-17T00:04:49.071900992Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 17 00:04:49.072313 kubelet[3407]: E0117 00:04:49.072221 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:04:49.073295 kubelet[3407]: E0117 00:04:49.072326 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 17 00:04:49.073295 kubelet[3407]: E0117 00:04:49.072497 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kwj5s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6886fb9d84-zxzfv_calico-system(82ea16ac-8d68-4d2e-9ce1-f2b920201dc6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:49.074416 kubelet[3407]: E0117 00:04:49.073752 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6886fb9d84-zxzfv" podUID="82ea16ac-8d68-4d2e-9ce1-f2b920201dc6" Jan 17 00:04:49.513613 containerd[2019]: time="2026-01-17T00:04:49.513180090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:04:49.782457 containerd[2019]: time="2026-01-17T00:04:49.782182651Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:49.784611 containerd[2019]: time="2026-01-17T00:04:49.784508155Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:04:49.784799 containerd[2019]: time="2026-01-17T00:04:49.784705867Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:04:49.785097 kubelet[3407]: E0117 00:04:49.785027 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:04:49.785670 kubelet[3407]: E0117 00:04:49.785105 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:04:49.785982 kubelet[3407]: E0117 00:04:49.785875 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t94sc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-dfrr7_calico-system(2760346a-cdd2-4959-9cca-5bf87123f24a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:04:49.787380 kubelet[3407]: E0117 00:04:49.787303 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dfrr7" podUID="2760346a-cdd2-4959-9cca-5bf87123f24a" Jan 17 00:04:52.511429 containerd[2019]: time="2026-01-17T00:04:52.511060749Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:04:52.811054 containerd[2019]: time="2026-01-17T00:04:52.810972658Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:04:52.813363 containerd[2019]: time="2026-01-17T00:04:52.813289414Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:04:52.813534 containerd[2019]: time="2026-01-17T00:04:52.813433522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:04:52.813764 kubelet[3407]: E0117 00:04:52.813679 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:04:52.814564 kubelet[3407]: E0117 00:04:52.813770 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:04:52.814564 kubelet[3407]: E0117 00:04:52.813956 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vmhd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jl689_calico-system(0e8ea394-25e8-46d5-8e69-e40f87a471c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:04:52.817590 containerd[2019]: time="2026-01-17T00:04:52.817305022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 17 00:04:53.099671 containerd[2019]: time="2026-01-17T00:04:53.099496484Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:04:53.103268 containerd[2019]: time="2026-01-17T00:04:53.102861956Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 17 00:04:53.103268 containerd[2019]: time="2026-01-17T00:04:53.102953732Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 17 00:04:53.104433 kubelet[3407]: E0117 00:04:53.103213 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 17 00:04:53.105287 kubelet[3407]: E0117 00:04:53.104282 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 17 00:04:53.105287 kubelet[3407]: E0117 00:04:53.104712 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vmhd6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jl689_calico-system(0e8ea394-25e8-46d5-8e69-e40f87a471c2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:04:53.106523 kubelet[3407]: E0117 00:04:53.106290 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jl689" podUID="0e8ea394-25e8-46d5-8e69-e40f87a471c2"
Jan 17 00:04:53.421366 systemd[1]: Started sshd@25-172.31.30.130:22-68.220.241.50:46670.service - OpenSSH per-connection server daemon (68.220.241.50:46670).
Jan 17 00:04:53.511987 kubelet[3407]: E0117 00:04:53.511805 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76869b6969-b9cdk" podUID="cecd0bc0-a5de-49ac-853f-0e0f9c309bd4"
Jan 17 00:04:53.956283 sshd[6354]: Accepted publickey for core from 68.220.241.50 port 46670 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20
Jan 17 00:04:53.958915 sshd[6354]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:04:53.974347 systemd-logind[2000]: New session 26 of user core.
Jan 17 00:04:53.980874 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 17 00:04:54.476249 sshd[6354]: pam_unix(sshd:session): session closed for user core
Jan 17 00:04:54.487807 systemd[1]: sshd@25-172.31.30.130:22-68.220.241.50:46670.service: Deactivated successfully.
Jan 17 00:04:54.493504 systemd[1]: session-26.scope: Deactivated successfully.
Jan 17 00:04:54.496357 systemd-logind[2000]: Session 26 logged out. Waiting for processes to exit.
Jan 17 00:04:54.504340 systemd-logind[2000]: Removed session 26.
Jan 17 00:04:58.511032 kubelet[3407]: E0117 00:04:58.510906 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67f478bb65-pq6fw" podUID="b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31"
Jan 17 00:04:59.511811 kubelet[3407]: E0117 00:04:59.511716 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76869b6969-4smvz" podUID="98e123b4-3ef3-4dbb-b304-2875273a6844"
Jan 17 00:05:02.510680 kubelet[3407]: E0117 00:05:02.510529 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dfrr7" podUID="2760346a-cdd2-4959-9cca-5bf87123f24a"
Jan 17 00:05:02.511524 kubelet[3407]: E0117 00:05:02.511450 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6886fb9d84-zxzfv" podUID="82ea16ac-8d68-4d2e-9ce1-f2b920201dc6"
Jan 17 00:05:05.511794 kubelet[3407]: E0117 00:05:05.511741 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76869b6969-b9cdk" podUID="cecd0bc0-a5de-49ac-853f-0e0f9c309bd4"
Jan 17 00:05:07.513826 kubelet[3407]: E0117 00:05:07.513346 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jl689" podUID="0e8ea394-25e8-46d5-8e69-e40f87a471c2"
Jan 17 00:05:08.260680 systemd[1]: cri-containerd-dc7f9831bd5ad49cfc09684ba567619513aff96b77d75fafaa943334e06f169a.scope: Deactivated successfully.
Jan 17 00:05:08.261137 systemd[1]: cri-containerd-dc7f9831bd5ad49cfc09684ba567619513aff96b77d75fafaa943334e06f169a.scope: Consumed 28.543s CPU time.
Jan 17 00:05:08.306471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dc7f9831bd5ad49cfc09684ba567619513aff96b77d75fafaa943334e06f169a-rootfs.mount: Deactivated successfully.
Jan 17 00:05:08.316323 containerd[2019]: time="2026-01-17T00:05:08.316160927Z" level=info msg="shim disconnected" id=dc7f9831bd5ad49cfc09684ba567619513aff96b77d75fafaa943334e06f169a namespace=k8s.io
Jan 17 00:05:08.316323 containerd[2019]: time="2026-01-17T00:05:08.316280747Z" level=warning msg="cleaning up after shim disconnected" id=dc7f9831bd5ad49cfc09684ba567619513aff96b77d75fafaa943334e06f169a namespace=k8s.io
Jan 17 00:05:08.317466 containerd[2019]: time="2026-01-17T00:05:08.317072591Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:05:08.465690 kubelet[3407]: I0117 00:05:08.465064 3407 scope.go:117] "RemoveContainer" containerID="dc7f9831bd5ad49cfc09684ba567619513aff96b77d75fafaa943334e06f169a"
Jan 17 00:05:08.472979 containerd[2019]: time="2026-01-17T00:05:08.472890960Z" level=info msg="CreateContainer within sandbox \"12f7637cbf635fc40a849d283e99c25325f93a3b4143a030eefcff770aca96ad\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 17 00:05:08.498180 containerd[2019]: time="2026-01-17T00:05:08.497994756Z" level=info msg="CreateContainer within sandbox \"12f7637cbf635fc40a849d283e99c25325f93a3b4143a030eefcff770aca96ad\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"e75ebf15de849b2dd2cc51cb2a3b8749d72c70a92c05ceef2ba3cf1e3cfe3e8b\""
Jan 17 00:05:08.499288 containerd[2019]: time="2026-01-17T00:05:08.498845964Z" level=info msg="StartContainer for \"e75ebf15de849b2dd2cc51cb2a3b8749d72c70a92c05ceef2ba3cf1e3cfe3e8b\""
Jan 17 00:05:08.562534 systemd[1]: Started cri-containerd-e75ebf15de849b2dd2cc51cb2a3b8749d72c70a92c05ceef2ba3cf1e3cfe3e8b.scope - libcontainer container e75ebf15de849b2dd2cc51cb2a3b8749d72c70a92c05ceef2ba3cf1e3cfe3e8b.
Jan 17 00:05:08.614970 containerd[2019]: time="2026-01-17T00:05:08.614889037Z" level=info msg="StartContainer for \"e75ebf15de849b2dd2cc51cb2a3b8749d72c70a92c05ceef2ba3cf1e3cfe3e8b\" returns successfully"
Jan 17 00:05:09.141683 systemd[1]: cri-containerd-9ae0d9d8486a3c7fcb609f48471cc70af8b99ac0ce11f02bad56fcbe8d7930f6.scope: Deactivated successfully.
Jan 17 00:05:09.142164 systemd[1]: cri-containerd-9ae0d9d8486a3c7fcb609f48471cc70af8b99ac0ce11f02bad56fcbe8d7930f6.scope: Consumed 6.617s CPU time, 17.5M memory peak, 0B memory swap peak.
Jan 17 00:05:09.195011 containerd[2019]: time="2026-01-17T00:05:09.194896524Z" level=info msg="shim disconnected" id=9ae0d9d8486a3c7fcb609f48471cc70af8b99ac0ce11f02bad56fcbe8d7930f6 namespace=k8s.io
Jan 17 00:05:09.195011 containerd[2019]: time="2026-01-17T00:05:09.195006732Z" level=warning msg="cleaning up after shim disconnected" id=9ae0d9d8486a3c7fcb609f48471cc70af8b99ac0ce11f02bad56fcbe8d7930f6 namespace=k8s.io
Jan 17 00:05:09.195415 containerd[2019]: time="2026-01-17T00:05:09.195032352Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:05:09.304063 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ae0d9d8486a3c7fcb609f48471cc70af8b99ac0ce11f02bad56fcbe8d7930f6-rootfs.mount: Deactivated successfully.
Jan 17 00:05:09.470933 kubelet[3407]: I0117 00:05:09.470804 3407 scope.go:117] "RemoveContainer" containerID="9ae0d9d8486a3c7fcb609f48471cc70af8b99ac0ce11f02bad56fcbe8d7930f6"
Jan 17 00:05:09.477123 containerd[2019]: time="2026-01-17T00:05:09.477048709Z" level=info msg="CreateContainer within sandbox \"73027666a7608066b1fd3f65192120d91442a0707315155392c45c414c017755\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 17 00:05:09.504406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2079278930.mount: Deactivated successfully.
Jan 17 00:05:09.508262 containerd[2019]: time="2026-01-17T00:05:09.508046877Z" level=info msg="CreateContainer within sandbox \"73027666a7608066b1fd3f65192120d91442a0707315155392c45c414c017755\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c4360accf68ff9840bde8c65992e7db2583bf98c2d94b37d6b73a279c3fe51d0\""
Jan 17 00:05:09.508912 containerd[2019]: time="2026-01-17T00:05:09.508786945Z" level=info msg="StartContainer for \"c4360accf68ff9840bde8c65992e7db2583bf98c2d94b37d6b73a279c3fe51d0\""
Jan 17 00:05:09.576585 systemd[1]: Started cri-containerd-c4360accf68ff9840bde8c65992e7db2583bf98c2d94b37d6b73a279c3fe51d0.scope - libcontainer container c4360accf68ff9840bde8c65992e7db2583bf98c2d94b37d6b73a279c3fe51d0.
Jan 17 00:05:09.655261 containerd[2019]: time="2026-01-17T00:05:09.653786222Z" level=info msg="StartContainer for \"c4360accf68ff9840bde8c65992e7db2583bf98c2d94b37d6b73a279c3fe51d0\" returns successfully"
Jan 17 00:05:11.513966 kubelet[3407]: E0117 00:05:11.513442 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-67f478bb65-pq6fw" podUID="b69cd1ae-f3f7-4ea4-9e97-d8c762b48c31"
Jan 17 00:05:12.511033 kubelet[3407]: E0117 00:05:12.510961 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76869b6969-4smvz" podUID="98e123b4-3ef3-4dbb-b304-2875273a6844"
Jan 17 00:05:13.747863 systemd[1]: cri-containerd-009d4d6691ad50efd778831154708a2f74ee3782e7de2025dec46066a3f3e6f3.scope: Deactivated successfully.
Jan 17 00:05:13.749867 systemd[1]: cri-containerd-009d4d6691ad50efd778831154708a2f74ee3782e7de2025dec46066a3f3e6f3.scope: Consumed 4.552s CPU time, 16.1M memory peak, 0B memory swap peak.
Jan 17 00:05:13.792896 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-009d4d6691ad50efd778831154708a2f74ee3782e7de2025dec46066a3f3e6f3-rootfs.mount: Deactivated successfully.
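The recurring "Back-off pulling image" errors around these entries are kubelet declining to retry immediately, not fresh pull attempts on every sync loop. Upstream kubelet backs off exponentially from a 10s initial delay to a 300s cap; that is my reading of the upstream defaults, not a value recorded on this node. A small sketch of the schedule:

```go
// backoff.go: the doubling schedule behind ImagePullBackOff. The 10s
// initial delay and 300s cap are upstream kubelet defaults as I
// understand them, not read from this node's configuration.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay, maxDelay := 10*time.Second, 300*time.Second
	var elapsed time.Duration
	for attempt := 1; attempt <= 8; attempt++ {
		elapsed += delay
		fmt.Printf("failure %d: next retry in %v (about %v after first failure)\n",
			attempt, delay, elapsed)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}
```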
Jan 17 00:05:13.809284 containerd[2019]: time="2026-01-17T00:05:13.808946095Z" level=info msg="shim disconnected" id=009d4d6691ad50efd778831154708a2f74ee3782e7de2025dec46066a3f3e6f3 namespace=k8s.io
Jan 17 00:05:13.810063 containerd[2019]: time="2026-01-17T00:05:13.809384263Z" level=warning msg="cleaning up after shim disconnected" id=009d4d6691ad50efd778831154708a2f74ee3782e7de2025dec46066a3f3e6f3 namespace=k8s.io
Jan 17 00:05:13.810579 containerd[2019]: time="2026-01-17T00:05:13.809414359Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:05:14.498835 kubelet[3407]: I0117 00:05:14.498768 3407 scope.go:117] "RemoveContainer" containerID="009d4d6691ad50efd778831154708a2f74ee3782e7de2025dec46066a3f3e6f3"
Jan 17 00:05:14.503657 containerd[2019]: time="2026-01-17T00:05:14.503568258Z" level=info msg="CreateContainer within sandbox \"e19ca760b57fedd37780e30cf72e2cd8739a664d7b007bc8d18a4153f435019f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 17 00:05:14.529277 containerd[2019]: time="2026-01-17T00:05:14.529148178Z" level=info msg="CreateContainer within sandbox \"e19ca760b57fedd37780e30cf72e2cd8739a664d7b007bc8d18a4153f435019f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"650e53e249db322fb3c7b54b72975451bc7e652cec5fc5637e6a029fc83ac8b6\""
Jan 17 00:05:14.530950 containerd[2019]: time="2026-01-17T00:05:14.530897994Z" level=info msg="StartContainer for \"650e53e249db322fb3c7b54b72975451bc7e652cec5fc5637e6a029fc83ac8b6\""
Jan 17 00:05:14.600566 systemd[1]: Started cri-containerd-650e53e249db322fb3c7b54b72975451bc7e652cec5fc5637e6a029fc83ac8b6.scope - libcontainer container 650e53e249db322fb3c7b54b72975451bc7e652cec5fc5637e6a029fc83ac8b6.
Jan 17 00:05:14.666868 containerd[2019]: time="2026-01-17T00:05:14.666804499Z" level=info msg="StartContainer for \"650e53e249db322fb3c7b54b72975451bc7e652cec5fc5637e6a029fc83ac8b6\" returns successfully"
Jan 17 00:05:14.798340 systemd[1]: run-containerd-runc-k8s.io-650e53e249db322fb3c7b54b72975451bc7e652cec5fc5637e6a029fc83ac8b6-runc.qyaGYt.mount: Deactivated successfully.
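By 00:05:14 three control-plane containers on this node (tigera-operator, kube-controller-manager, kube-scheduler) have each been recreated once; the Attempt:1 in each CreateContainer entry is the CRI restart counter, which the API exposes as status.restartCount. A hedged client-go sketch to enumerate it; the kubeconfig path is an assumption about this machine:

```go
// restarts.go: list container restart counts in kube-system. The
// Attempt value in the CreateContainer entries above is the CRI-level
// twin of the status.restartCount reported here.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed location; the admin kubeconfig may live elsewhere on Flatcar.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		log.Fatal(err)
	}
	for _, pod := range pods.Items {
		for _, st := range pod.Status.ContainerStatuses {
			if st.RestartCount > 0 {
				fmt.Printf("%s/%s restarts=%d\n", pod.Name, st.Name, st.RestartCount)
			}
		}
	}
}
```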
Jan 17 00:05:15.510327 kubelet[3407]: E0117 00:05:15.510155 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-dfrr7" podUID="2760346a-cdd2-4959-9cca-5bf87123f24a"
Jan 17 00:05:17.223764 kubelet[3407]: E0117 00:05:17.222712 3407 controller.go:195] "Failed to update lease" err="Put \"https://172.31.30.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-130?timeout=10s\": context deadline exceeded"
Jan 17 00:05:17.511519 kubelet[3407]: E0117 00:05:17.510857 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6886fb9d84-zxzfv" podUID="82ea16ac-8d68-4d2e-9ce1-f2b920201dc6"
Jan 17 00:05:18.511009 kubelet[3407]: E0117 00:05:18.510911 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-76869b6969-b9cdk" podUID="cecd0bc0-a5de-49ac-853f-0e0f9c309bd4"
Jan 17 00:05:20.091067 systemd[1]: cri-containerd-e75ebf15de849b2dd2cc51cb2a3b8749d72c70a92c05ceef2ba3cf1e3cfe3e8b.scope: Deactivated successfully.
Jan 17 00:05:20.133593 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e75ebf15de849b2dd2cc51cb2a3b8749d72c70a92c05ceef2ba3cf1e3cfe3e8b-rootfs.mount: Deactivated successfully.
Jan 17 00:05:20.149760 containerd[2019]: time="2026-01-17T00:05:20.149358994Z" level=info msg="shim disconnected" id=e75ebf15de849b2dd2cc51cb2a3b8749d72c70a92c05ceef2ba3cf1e3cfe3e8b namespace=k8s.io
Jan 17 00:05:20.149760 containerd[2019]: time="2026-01-17T00:05:20.149432830Z" level=warning msg="cleaning up after shim disconnected" id=e75ebf15de849b2dd2cc51cb2a3b8749d72c70a92c05ceef2ba3cf1e3cfe3e8b namespace=k8s.io
Jan 17 00:05:20.149760 containerd[2019]: time="2026-01-17T00:05:20.149454970Z" level=info msg="cleaning up dead shim" namespace=k8s.io
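The 00:05:17 lease failure connects the two threads: kubelet's PUT of its node lease to https://172.31.30.130:6443 hit its 10s timeout while the control-plane containers above were being restarted, and sustained renewal failures would eventually flip the node to NotReady. Note also that the replacement tigera-operator container e75ebf15… itself exits at 00:05:20, so that crash loop is still running. A diagnostic sketch of the same lease renewal using client-go; the namespace and lease name come from the log line, while the kubeconfig path is an assumption:

```go
// leasecheck.go: fetch and renew the node lease kubelet failed to PUT
// above. Namespace and lease name are taken from the journal entry; the
// kubeconfig path is assumed. This is a diagnostic sketch, not kubelet code.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	leases := cs.CoordinationV1().Leases("kube-node-lease")
	lease, err := leases.Get(context.TODO(), "ip-172-31-30-130", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err) // a timeout here reproduces kubelet's symptom
	}
	if lease.Spec.HolderIdentity != nil && lease.Spec.RenewTime != nil {
		fmt.Printf("holder=%s lastRenew=%s\n",
			*lease.Spec.HolderIdentity, lease.Spec.RenewTime.Format(time.RFC3339))
	}
	// Bump RenewTime the way kubelet does on each renewal cycle.
	now := metav1.NewMicroTime(time.Now())
	lease.Spec.RenewTime = &now
	if _, err := leases.Update(context.TODO(), lease, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
}
```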