Jan 23 23:55:15.307122 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jan 23 23:55:15.307170 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 23 22:26:47 -00 2026 Jan 23 23:55:15.307196 kernel: KASLR disabled due to lack of seed Jan 23 23:55:15.307213 kernel: efi: EFI v2.7 by EDK II Jan 23 23:55:15.307230 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18 Jan 23 23:55:15.307246 kernel: ACPI: Early table checksum verification disabled Jan 23 23:55:15.307264 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jan 23 23:55:15.307280 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jan 23 23:55:15.307296 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jan 23 23:55:15.307311 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001) Jan 23 23:55:15.307332 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jan 23 23:55:15.307348 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jan 23 23:55:15.307364 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jan 23 23:55:15.307380 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jan 23 23:55:15.307399 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jan 23 23:55:15.307419 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jan 23 23:55:15.307437 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jan 23 23:55:15.307454 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jan 23 23:55:15.307470 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jan 23 23:55:15.307487 kernel: printk: bootconsole [uart0] enabled Jan 23 23:55:15.307504 kernel: NUMA: Failed to initialise from firmware Jan 23 23:55:15.307521 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jan 23 23:55:15.307537 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jan 23 23:55:15.307554 kernel: Zone ranges: Jan 23 23:55:15.307570 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 23 23:55:15.307587 kernel: DMA32 empty Jan 23 23:55:15.307608 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jan 23 23:55:15.307625 kernel: Movable zone start for each node Jan 23 23:55:15.307641 kernel: Early memory node ranges Jan 23 23:55:15.307657 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jan 23 23:55:15.307674 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jan 23 23:55:15.307691 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jan 23 23:55:15.307707 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jan 23 23:55:15.307724 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jan 23 23:55:15.307740 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jan 23 23:55:15.307757 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jan 23 23:55:15.307773 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jan 23 23:55:15.307790 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jan 23 23:55:15.307864 kernel: On 
node 0, zone Normal: 8192 pages in unavailable ranges Jan 23 23:55:15.307896 kernel: psci: probing for conduit method from ACPI. Jan 23 23:55:15.307925 kernel: psci: PSCIv1.0 detected in firmware. Jan 23 23:55:15.307944 kernel: psci: Using standard PSCI v0.2 function IDs Jan 23 23:55:15.307962 kernel: psci: Trusted OS migration not required Jan 23 23:55:15.308007 kernel: psci: SMC Calling Convention v1.1 Jan 23 23:55:15.308027 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001) Jan 23 23:55:15.308045 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880 Jan 23 23:55:15.308062 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096 Jan 23 23:55:15.308080 kernel: pcpu-alloc: [0] 0 [0] 1 Jan 23 23:55:15.308098 kernel: Detected PIPT I-cache on CPU0 Jan 23 23:55:15.308115 kernel: CPU features: detected: GIC system register CPU interface Jan 23 23:55:15.308132 kernel: CPU features: detected: Spectre-v2 Jan 23 23:55:15.308150 kernel: CPU features: detected: Spectre-v3a Jan 23 23:55:15.308167 kernel: CPU features: detected: Spectre-BHB Jan 23 23:55:15.308185 kernel: CPU features: detected: ARM erratum 1742098 Jan 23 23:55:15.308207 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jan 23 23:55:15.308225 kernel: alternatives: applying boot alternatives Jan 23 23:55:15.308245 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09 Jan 23 23:55:15.308263 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 23 23:55:15.308281 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 23 23:55:15.308298 kernel: Fallback order for Node 0: 0 Jan 23 23:55:15.308316 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jan 23 23:55:15.308333 kernel: Policy zone: Normal Jan 23 23:55:15.308351 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 23 23:55:15.308368 kernel: software IO TLB: area num 2. Jan 23 23:55:15.308385 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jan 23 23:55:15.308410 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved) Jan 23 23:55:15.308428 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 23 23:55:15.308445 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 23 23:55:15.308463 kernel: rcu: RCU event tracing is enabled. Jan 23 23:55:15.308481 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 23 23:55:15.308499 kernel: Trampoline variant of Tasks RCU enabled. Jan 23 23:55:15.308517 kernel: Tracing variant of Tasks RCU enabled. Jan 23 23:55:15.308535 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jan 23 23:55:15.308553 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 23 23:55:15.308571 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 23 23:55:15.308588 kernel: GICv3: 96 SPIs implemented Jan 23 23:55:15.308610 kernel: GICv3: 0 Extended SPIs implemented Jan 23 23:55:15.308627 kernel: Root IRQ handler: gic_handle_irq Jan 23 23:55:15.308645 kernel: GICv3: GICv3 features: 16 PPIs Jan 23 23:55:15.308662 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jan 23 23:55:15.308679 kernel: ITS [mem 0x10080000-0x1009ffff] Jan 23 23:55:15.308697 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1) Jan 23 23:55:15.308715 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1) Jan 23 23:55:15.308733 kernel: GICv3: using LPI property table @0x00000004000d0000 Jan 23 23:55:15.308750 kernel: ITS: Using hypervisor restricted LPI range [128] Jan 23 23:55:15.308768 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000 Jan 23 23:55:15.308786 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 23 23:55:15.308803 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jan 23 23:55:15.308826 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jan 23 23:55:15.308844 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jan 23 23:55:15.308861 kernel: Console: colour dummy device 80x25 Jan 23 23:55:15.308879 kernel: printk: console [tty1] enabled Jan 23 23:55:15.308898 kernel: ACPI: Core revision 20230628 Jan 23 23:55:15.308916 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jan 23 23:55:15.308934 kernel: pid_max: default: 32768 minimum: 301 Jan 23 23:55:15.308952 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 23 23:55:15.309012 kernel: landlock: Up and running. Jan 23 23:55:15.309041 kernel: SELinux: Initializing. Jan 23 23:55:15.309060 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 23:55:15.309079 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jan 23 23:55:15.309098 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 23:55:15.309117 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 23 23:55:15.309135 kernel: rcu: Hierarchical SRCU implementation. Jan 23 23:55:15.309153 kernel: rcu: Max phase no-delay instances is 400. Jan 23 23:55:15.309172 kernel: Platform MSI: ITS@0x10080000 domain created Jan 23 23:55:15.309190 kernel: PCI/MSI: ITS@0x10080000 domain created Jan 23 23:55:15.309213 kernel: Remapping and enabling EFI services. Jan 23 23:55:15.309231 kernel: smp: Bringing up secondary CPUs ... Jan 23 23:55:15.309249 kernel: Detected PIPT I-cache on CPU1 Jan 23 23:55:15.309267 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jan 23 23:55:15.309285 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000 Jan 23 23:55:15.309303 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jan 23 23:55:15.309320 kernel: smp: Brought up 1 node, 2 CPUs Jan 23 23:55:15.309338 kernel: SMP: Total of 2 processors activated. 
Jan 23 23:55:15.309356 kernel: CPU features: detected: 32-bit EL0 Support Jan 23 23:55:15.309378 kernel: CPU features: detected: 32-bit EL1 Support Jan 23 23:55:15.309396 kernel: CPU features: detected: CRC32 instructions Jan 23 23:55:15.309414 kernel: CPU: All CPU(s) started at EL1 Jan 23 23:55:15.309443 kernel: alternatives: applying system-wide alternatives Jan 23 23:55:15.309466 kernel: devtmpfs: initialized Jan 23 23:55:15.309485 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 23 23:55:15.309503 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 23 23:55:15.309522 kernel: pinctrl core: initialized pinctrl subsystem Jan 23 23:55:15.309541 kernel: SMBIOS 3.0.0 present. Jan 23 23:55:15.309564 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jan 23 23:55:15.309582 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 23 23:55:15.309601 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jan 23 23:55:15.309620 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 23 23:55:15.309639 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 23 23:55:15.309658 kernel: audit: initializing netlink subsys (disabled) Jan 23 23:55:15.309676 kernel: audit: type=2000 audit(0.292:1): state=initialized audit_enabled=0 res=1 Jan 23 23:55:15.309695 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 23 23:55:15.309718 kernel: cpuidle: using governor menu Jan 23 23:55:15.309736 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 23 23:55:15.309755 kernel: ASID allocator initialised with 65536 entries Jan 23 23:55:15.309774 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 23 23:55:15.309793 kernel: Serial: AMBA PL011 UART driver Jan 23 23:55:15.309811 kernel: Modules: 17488 pages in range for non-PLT usage Jan 23 23:55:15.309830 kernel: Modules: 509008 pages in range for PLT usage Jan 23 23:55:15.309848 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 23 23:55:15.309867 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 23 23:55:15.309890 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 23 23:55:15.309909 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 23 23:55:15.309928 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 23 23:55:15.309947 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 23 23:55:15.311032 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 23 23:55:15.311070 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 23 23:55:15.311089 kernel: ACPI: Added _OSI(Module Device) Jan 23 23:55:15.311108 kernel: ACPI: Added _OSI(Processor Device) Jan 23 23:55:15.311127 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 23 23:55:15.311155 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 23 23:55:15.311175 kernel: ACPI: Interpreter enabled Jan 23 23:55:15.311193 kernel: ACPI: Using GIC for interrupt routing Jan 23 23:55:15.311212 kernel: ACPI: MCFG table detected, 1 entries Jan 23 23:55:15.311230 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00]) Jan 23 23:55:15.311543 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 23 23:55:15.311760 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 23 23:55:15.312021 
kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 23 23:55:15.312321 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00 Jan 23 23:55:15.314331 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00] Jan 23 23:55:15.314375 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jan 23 23:55:15.314396 kernel: acpiphp: Slot [1] registered Jan 23 23:55:15.314416 kernel: acpiphp: Slot [2] registered Jan 23 23:55:15.314435 kernel: acpiphp: Slot [3] registered Jan 23 23:55:15.314454 kernel: acpiphp: Slot [4] registered Jan 23 23:55:15.314473 kernel: acpiphp: Slot [5] registered Jan 23 23:55:15.314501 kernel: acpiphp: Slot [6] registered Jan 23 23:55:15.314520 kernel: acpiphp: Slot [7] registered Jan 23 23:55:15.314539 kernel: acpiphp: Slot [8] registered Jan 23 23:55:15.314557 kernel: acpiphp: Slot [9] registered Jan 23 23:55:15.314576 kernel: acpiphp: Slot [10] registered Jan 23 23:55:15.314594 kernel: acpiphp: Slot [11] registered Jan 23 23:55:15.314613 kernel: acpiphp: Slot [12] registered Jan 23 23:55:15.314631 kernel: acpiphp: Slot [13] registered Jan 23 23:55:15.314650 kernel: acpiphp: Slot [14] registered Jan 23 23:55:15.314668 kernel: acpiphp: Slot [15] registered Jan 23 23:55:15.314692 kernel: acpiphp: Slot [16] registered Jan 23 23:55:15.314711 kernel: acpiphp: Slot [17] registered Jan 23 23:55:15.314729 kernel: acpiphp: Slot [18] registered Jan 23 23:55:15.314747 kernel: acpiphp: Slot [19] registered Jan 23 23:55:15.314766 kernel: acpiphp: Slot [20] registered Jan 23 23:55:15.314785 kernel: acpiphp: Slot [21] registered Jan 23 23:55:15.314803 kernel: acpiphp: Slot [22] registered Jan 23 23:55:15.314822 kernel: acpiphp: Slot [23] registered Jan 23 23:55:15.314840 kernel: acpiphp: Slot [24] registered Jan 23 23:55:15.314863 kernel: acpiphp: Slot [25] registered Jan 23 23:55:15.314882 kernel: acpiphp: Slot [26] registered Jan 23 23:55:15.314900 kernel: acpiphp: Slot [27] registered Jan 23 23:55:15.314919 kernel: acpiphp: Slot [28] registered Jan 23 23:55:15.314937 kernel: acpiphp: Slot [29] registered Jan 23 23:55:15.314956 kernel: acpiphp: Slot [30] registered Jan 23 23:55:15.315013 kernel: acpiphp: Slot [31] registered Jan 23 23:55:15.315053 kernel: PCI host bridge to bus 0000:00 Jan 23 23:55:15.315279 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jan 23 23:55:15.315483 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 23 23:55:15.315674 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jan 23 23:55:15.315893 kernel: pci_bus 0000:00: root bus resource [bus 00] Jan 23 23:55:15.318374 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jan 23 23:55:15.318627 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jan 23 23:55:15.318838 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jan 23 23:55:15.319101 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jan 23 23:55:15.319319 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jan 23 23:55:15.319530 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 23 23:55:15.325496 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jan 23 23:55:15.328426 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jan 23 23:55:15.328648 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Jan 23 23:55:15.328853 kernel: pci 0000:00:05.0: reg 0x20: [mem 
0x80100000-0x8010ffff] Jan 23 23:55:15.330192 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 23 23:55:15.330412 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jan 23 23:55:15.330596 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 23 23:55:15.330781 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jan 23 23:55:15.330810 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 23 23:55:15.330830 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 23 23:55:15.330849 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 23 23:55:15.330868 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 23 23:55:15.330897 kernel: iommu: Default domain type: Translated Jan 23 23:55:15.330916 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 23 23:55:15.330936 kernel: efivars: Registered efivars operations Jan 23 23:55:15.330955 kernel: vgaarb: loaded Jan 23 23:55:15.332093 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 23 23:55:15.332121 kernel: VFS: Disk quotas dquot_6.6.0 Jan 23 23:55:15.332141 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 23 23:55:15.332161 kernel: pnp: PnP ACPI init Jan 23 23:55:15.332411 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jan 23 23:55:15.332451 kernel: pnp: PnP ACPI: found 1 devices Jan 23 23:55:15.332470 kernel: NET: Registered PF_INET protocol family Jan 23 23:55:15.332490 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 23 23:55:15.332509 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 23 23:55:15.332529 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 23 23:55:15.332548 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 23 23:55:15.332567 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 23 23:55:15.332586 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 23 23:55:15.332611 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 23:55:15.332631 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 23 23:55:15.332650 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 23 23:55:15.332670 kernel: PCI: CLS 0 bytes, default 64 Jan 23 23:55:15.332691 kernel: kvm [1]: HYP mode not available Jan 23 23:55:15.332711 kernel: Initialise system trusted keyrings Jan 23 23:55:15.332731 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 23 23:55:15.332752 kernel: Key type asymmetric registered Jan 23 23:55:15.332771 kernel: Asymmetric key parser 'x509' registered Jan 23 23:55:15.332797 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Jan 23 23:55:15.332818 kernel: io scheduler mq-deadline registered Jan 23 23:55:15.332838 kernel: io scheduler kyber registered Jan 23 23:55:15.332858 kernel: io scheduler bfq registered Jan 23 23:55:15.333192 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jan 23 23:55:15.333235 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 23 23:55:15.333256 kernel: ACPI: button: Power Button [PWRB] Jan 23 23:55:15.333275 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jan 23 23:55:15.333296 kernel: ACPI: button: Sleep Button [SLPB] Jan 23 
23:55:15.333328 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 23 23:55:15.333350 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 23 23:55:15.333591 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jan 23 23:55:15.333621 kernel: printk: console [ttyS0] disabled Jan 23 23:55:15.333641 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jan 23 23:55:15.333661 kernel: printk: console [ttyS0] enabled Jan 23 23:55:15.333680 kernel: printk: bootconsole [uart0] disabled Jan 23 23:55:15.333699 kernel: thunder_xcv, ver 1.0 Jan 23 23:55:15.333719 kernel: thunder_bgx, ver 1.0 Jan 23 23:55:15.333745 kernel: nicpf, ver 1.0 Jan 23 23:55:15.333765 kernel: nicvf, ver 1.0 Jan 23 23:55:15.335589 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 23 23:55:15.335862 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:55:14 UTC (1769212514) Jan 23 23:55:15.335892 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 23 23:55:15.335912 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jan 23 23:55:15.335932 kernel: watchdog: Delayed init of the lockup detector failed: -19 Jan 23 23:55:15.335951 kernel: watchdog: Hard watchdog permanently disabled Jan 23 23:55:15.336107 kernel: NET: Registered PF_INET6 protocol family Jan 23 23:55:15.336127 kernel: Segment Routing with IPv6 Jan 23 23:55:15.336146 kernel: In-situ OAM (IOAM) with IPv6 Jan 23 23:55:15.336165 kernel: NET: Registered PF_PACKET protocol family Jan 23 23:55:15.336184 kernel: Key type dns_resolver registered Jan 23 23:55:15.336203 kernel: registered taskstats version 1 Jan 23 23:55:15.336222 kernel: Loading compiled-in X.509 certificates Jan 23 23:55:15.336241 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: e1080b1efd8e2d5332b6814128fba42796535445' Jan 23 23:55:15.336260 kernel: Key type .fscrypt registered Jan 23 23:55:15.336286 kernel: Key type fscrypt-provisioning registered Jan 23 23:55:15.336305 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 23 23:55:15.336324 kernel: ima: Allocated hash algorithm: sha1 Jan 23 23:55:15.336343 kernel: ima: No architecture policies found Jan 23 23:55:15.336362 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 23 23:55:15.336381 kernel: clk: Disabling unused clocks Jan 23 23:55:15.336399 kernel: Freeing unused kernel memory: 39424K Jan 23 23:55:15.336417 kernel: Run /init as init process Jan 23 23:55:15.336436 kernel: with arguments: Jan 23 23:55:15.336460 kernel: /init Jan 23 23:55:15.336479 kernel: with environment: Jan 23 23:55:15.336497 kernel: HOME=/ Jan 23 23:55:15.336516 kernel: TERM=linux Jan 23 23:55:15.336539 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 23 23:55:15.336563 systemd[1]: Detected virtualization amazon. Jan 23 23:55:15.336584 systemd[1]: Detected architecture arm64. Jan 23 23:55:15.336605 systemd[1]: Running in initrd. Jan 23 23:55:15.336630 systemd[1]: No hostname configured, using default hostname. Jan 23 23:55:15.336651 systemd[1]: Hostname set to . Jan 23 23:55:15.336672 systemd[1]: Initializing machine ID from VM UUID. Jan 23 23:55:15.336693 systemd[1]: Queued start job for default target initrd.target. 
Jan 23 23:55:15.336713 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:55:15.336734 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:55:15.336755 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 23 23:55:15.336776 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 23:55:15.336802 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 23 23:55:15.336823 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 23 23:55:15.336847 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 23 23:55:15.336868 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 23 23:55:15.336889 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:55:15.336909 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:55:15.336935 systemd[1]: Reached target paths.target - Path Units. Jan 23 23:55:15.336955 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:55:15.336999 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:55:15.337022 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:55:15.337043 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:55:15.337064 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:55:15.337085 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 23 23:55:15.337105 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 23 23:55:15.337126 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:55:15.337154 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:55:15.337175 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:55:15.337195 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:55:15.337216 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 23 23:55:15.337236 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:55:15.337257 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 23 23:55:15.337277 systemd[1]: Starting systemd-fsck-usr.service... Jan 23 23:55:15.337297 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 23:55:15.337318 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 23:55:15.337344 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:55:15.337365 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 23 23:55:15.337385 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:55:15.337406 systemd[1]: Finished systemd-fsck-usr.service. Jan 23 23:55:15.337428 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 23:55:15.337494 systemd-journald[251]: Collecting audit messages is disabled. Jan 23 23:55:15.337539 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Jan 23 23:55:15.337558 kernel: Bridge firewalling registered Jan 23 23:55:15.337585 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:55:15.337606 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:55:15.337628 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 23:55:15.337649 systemd-journald[251]: Journal started Jan 23 23:55:15.337687 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2df1888b6a54750aa210583ffe2276) is 8.0M, max 75.3M, 67.3M free. Jan 23 23:55:15.266992 systemd-modules-load[252]: Inserted module 'overlay' Jan 23 23:55:15.315103 systemd-modules-load[252]: Inserted module 'br_netfilter' Jan 23 23:55:15.369638 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:55:15.369713 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 23:55:15.363014 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:55:15.371264 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 23:55:15.385274 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 23:55:15.421986 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:55:15.427743 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:55:15.444625 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:55:15.459305 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 23:55:15.467025 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:55:15.480480 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 23 23:55:15.521413 dracut-cmdline[289]: dracut-dracut-053 Jan 23 23:55:15.530118 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09 Jan 23 23:55:15.565999 systemd-resolved[286]: Positive Trust Anchors: Jan 23 23:55:15.567419 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 23:55:15.569389 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 23:55:15.696024 kernel: SCSI subsystem initialized Jan 23 23:55:15.704012 kernel: Loading iSCSI transport class v2.0-870. 
Jan 23 23:55:15.717001 kernel: iscsi: registered transport (tcp) Jan 23 23:55:15.740211 kernel: iscsi: registered transport (qla4xxx) Jan 23 23:55:15.740287 kernel: QLogic iSCSI HBA Driver Jan 23 23:55:15.810003 kernel: random: crng init done Jan 23 23:55:15.809583 systemd-resolved[286]: Defaulting to hostname 'linux'. Jan 23 23:55:15.814066 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 23:55:15.820151 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:55:15.849894 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 23 23:55:15.863376 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 23 23:55:15.900023 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 23 23:55:15.900100 kernel: device-mapper: uevent: version 1.0.3 Jan 23 23:55:15.900142 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 23 23:55:15.969023 kernel: raid6: neonx8 gen() 6675 MB/s Jan 23 23:55:15.986019 kernel: raid6: neonx4 gen() 6506 MB/s Jan 23 23:55:16.003017 kernel: raid6: neonx2 gen() 5437 MB/s Jan 23 23:55:16.020009 kernel: raid6: neonx1 gen() 3938 MB/s Jan 23 23:55:16.037021 kernel: raid6: int64x8 gen() 3815 MB/s Jan 23 23:55:16.054021 kernel: raid6: int64x4 gen() 3701 MB/s Jan 23 23:55:16.071024 kernel: raid6: int64x2 gen() 3597 MB/s Jan 23 23:55:16.089226 kernel: raid6: int64x1 gen() 2752 MB/s Jan 23 23:55:16.089312 kernel: raid6: using algorithm neonx8 gen() 6675 MB/s Jan 23 23:55:16.108173 kernel: raid6: .... xor() 4851 MB/s, rmw enabled Jan 23 23:55:16.108271 kernel: raid6: using neon recovery algorithm Jan 23 23:55:16.117032 kernel: xor: measuring software checksum speed Jan 23 23:55:16.117113 kernel: 8regs : 9991 MB/sec Jan 23 23:55:16.119373 kernel: 32regs : 11943 MB/sec Jan 23 23:55:16.120749 kernel: arm64_neon : 9493 MB/sec Jan 23 23:55:16.120809 kernel: xor: using function: 32regs (11943 MB/sec) Jan 23 23:55:16.210028 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 23 23:55:16.233268 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:55:16.247272 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:55:16.288206 systemd-udevd[472]: Using default interface naming scheme 'v255'. Jan 23 23:55:16.296817 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:55:16.311276 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 23 23:55:16.352030 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation Jan 23 23:55:16.417688 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:55:16.430528 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 23:55:16.564558 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:55:16.578291 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 23 23:55:16.635181 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 23 23:55:16.644526 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:55:16.651954 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:55:16.655735 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
Jan 23 23:55:16.674363 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 23 23:55:16.724469 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:55:16.773651 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 23 23:55:16.773728 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jan 23 23:55:16.783787 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:55:16.783933 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:55:16.793991 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jan 23 23:55:16.794306 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jan 23 23:55:16.795720 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 23:55:16.798548 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:55:16.801954 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:55:16.821058 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:4e:a5:b7:24:89 Jan 23 23:55:16.810403 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:55:16.829326 (udev-worker)[514]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:55:16.830073 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:55:16.848424 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 23 23:55:16.848471 kernel: nvme nvme0: pci function 0000:00:04.0 Jan 23 23:55:16.861055 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jan 23 23:55:16.871622 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 23 23:55:16.871702 kernel: GPT:9289727 != 33554431 Jan 23 23:55:16.873204 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 23 23:55:16.874140 kernel: GPT:9289727 != 33554431 Jan 23 23:55:16.874188 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 23 23:55:16.874217 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 23:55:16.892522 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:55:16.903323 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 23 23:55:16.962771 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:55:17.000094 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (527) Jan 23 23:55:17.019027 kernel: BTRFS: device fsid 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (515) Jan 23 23:55:17.117883 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jan 23 23:55:17.138401 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jan 23 23:55:17.179309 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 23:55:17.193546 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jan 23 23:55:17.200692 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jan 23 23:55:17.220266 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Jan 23 23:55:17.232406 disk-uuid[660]: Primary Header is updated. Jan 23 23:55:17.232406 disk-uuid[660]: Secondary Entries is updated. Jan 23 23:55:17.232406 disk-uuid[660]: Secondary Header is updated. Jan 23 23:55:17.245005 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 23:55:17.251050 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 23:55:17.260025 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 23:55:18.262024 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jan 23 23:55:18.264923 disk-uuid[661]: The operation has completed successfully. Jan 23 23:55:18.473876 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 23 23:55:18.474235 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 23 23:55:18.534326 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 23 23:55:18.543561 sh[1006]: Success Jan 23 23:55:18.572538 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jan 23 23:55:18.678528 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 23 23:55:18.692280 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 23 23:55:18.714234 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 23 23:55:18.739673 kernel: BTRFS info (device dm-0): first mount of filesystem 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe Jan 23 23:55:18.739765 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:55:18.743038 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 23 23:55:18.743120 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 23 23:55:18.743473 kernel: BTRFS info (device dm-0): using free space tree Jan 23 23:55:18.856025 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 23 23:55:18.869271 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 23 23:55:18.874198 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 23 23:55:18.895305 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 23 23:55:18.904333 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 23 23:55:18.933287 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:55:18.933381 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:55:18.933412 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 23 23:55:18.950033 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 23:55:18.972813 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 23 23:55:18.977602 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:55:18.991171 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 23 23:55:19.002324 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 23 23:55:19.142118 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:55:19.154293 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 23 23:55:19.227057 systemd-networkd[1206]: lo: Link UP Jan 23 23:55:19.227074 systemd-networkd[1206]: lo: Gained carrier Jan 23 23:55:19.232918 systemd-networkd[1206]: Enumeration completed Jan 23 23:55:19.234823 systemd-networkd[1206]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:55:19.234831 systemd-networkd[1206]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:55:19.237563 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:55:19.244570 systemd-networkd[1206]: eth0: Link UP Jan 23 23:55:19.244611 systemd-networkd[1206]: eth0: Gained carrier Jan 23 23:55:19.244634 systemd-networkd[1206]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:55:19.260601 systemd[1]: Reached target network.target - Network. Jan 23 23:55:19.288177 systemd-networkd[1206]: eth0: DHCPv4 address 172.31.20.253/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 23:55:19.500897 ignition[1111]: Ignition 2.19.0 Jan 23 23:55:19.500932 ignition[1111]: Stage: fetch-offline Jan 23 23:55:19.505942 ignition[1111]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:55:19.506043 ignition[1111]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:55:19.508816 ignition[1111]: Ignition finished successfully Jan 23 23:55:19.515062 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:55:19.525457 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 23 23:55:19.561538 ignition[1216]: Ignition 2.19.0 Jan 23 23:55:19.561623 ignition[1216]: Stage: fetch Jan 23 23:55:19.563912 ignition[1216]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:55:19.563945 ignition[1216]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:55:19.565397 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:55:19.578591 ignition[1216]: PUT result: OK Jan 23 23:55:19.582886 ignition[1216]: parsed url from cmdline: "" Jan 23 23:55:19.582906 ignition[1216]: no config URL provided Jan 23 23:55:19.582926 ignition[1216]: reading system config file "/usr/lib/ignition/user.ign" Jan 23 23:55:19.582958 ignition[1216]: no config at "/usr/lib/ignition/user.ign" Jan 23 23:55:19.583038 ignition[1216]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:55:19.588014 ignition[1216]: PUT result: OK Jan 23 23:55:19.588129 ignition[1216]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jan 23 23:55:19.590755 ignition[1216]: GET result: OK Jan 23 23:55:19.590957 ignition[1216]: parsing config with SHA512: 38f33e6d37896431c5b2f65cf49a96f7d78986b5d60d7e734ccbc47bf1c23bc0cf0d3148caf0a9318da60aeab4df8abaf871c03d9f548b7f2609fa2a60735586 Jan 23 23:55:19.612936 unknown[1216]: fetched base config from "system" Jan 23 23:55:19.613014 unknown[1216]: fetched base config from "system" Jan 23 23:55:19.615749 ignition[1216]: fetch: fetch complete Jan 23 23:55:19.613039 unknown[1216]: fetched user config from "aws" Jan 23 23:55:19.615765 ignition[1216]: fetch: fetch passed Jan 23 23:55:19.615919 ignition[1216]: Ignition finished successfully Jan 23 23:55:19.628932 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 23 23:55:19.641313 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 23 23:55:19.686858 ignition[1222]: Ignition 2.19.0 Jan 23 23:55:19.689051 ignition[1222]: Stage: kargs Jan 23 23:55:19.689828 ignition[1222]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:55:19.689859 ignition[1222]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:55:19.691609 ignition[1222]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:55:19.700279 ignition[1222]: PUT result: OK Jan 23 23:55:19.706487 ignition[1222]: kargs: kargs passed Jan 23 23:55:19.706702 ignition[1222]: Ignition finished successfully Jan 23 23:55:19.712083 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 23 23:55:19.724349 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 23 23:55:19.761722 ignition[1228]: Ignition 2.19.0 Jan 23 23:55:19.761761 ignition[1228]: Stage: disks Jan 23 23:55:19.763271 ignition[1228]: no configs at "/usr/lib/ignition/base.d" Jan 23 23:55:19.763301 ignition[1228]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:55:19.763469 ignition[1228]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:55:19.765702 ignition[1228]: PUT result: OK Jan 23 23:55:19.778472 ignition[1228]: disks: disks passed Jan 23 23:55:19.778636 ignition[1228]: Ignition finished successfully Jan 23 23:55:19.784441 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 23 23:55:19.791419 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 23 23:55:19.803500 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 23 23:55:19.806524 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 23:55:19.811601 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 23:55:19.822062 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:55:19.835394 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 23 23:55:19.882294 systemd-fsck[1236]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 23 23:55:19.888052 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 23 23:55:19.902363 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 23 23:55:20.009022 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 4f5f6971-6639-4171-835a-63d34aadb0e5 r/w with ordered data mode. Quota mode: none. Jan 23 23:55:20.011919 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 23 23:55:20.016783 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 23 23:55:20.034236 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 23:55:20.047365 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 23 23:55:20.054497 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 23 23:55:20.054631 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 23 23:55:20.054693 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 23:55:20.083533 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jan 23 23:55:20.087689 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1255) Jan 23 23:55:20.096046 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:55:20.096140 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:55:20.096171 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 23 23:55:20.100433 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 23 23:55:20.110018 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 23:55:20.113488 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 23 23:55:20.375445 initrd-setup-root[1279]: cut: /sysroot/etc/passwd: No such file or directory Jan 23 23:55:20.399627 initrd-setup-root[1286]: cut: /sysroot/etc/group: No such file or directory Jan 23 23:55:20.409657 initrd-setup-root[1293]: cut: /sysroot/etc/shadow: No such file or directory Jan 23 23:55:20.419591 initrd-setup-root[1300]: cut: /sysroot/etc/gshadow: No such file or directory Jan 23 23:55:20.764004 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 23 23:55:20.776355 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 23 23:55:20.783333 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 23 23:55:20.806761 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 23 23:55:20.809648 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:55:20.860090 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 23 23:55:20.868639 ignition[1368]: INFO : Ignition 2.19.0 Jan 23 23:55:20.868639 ignition[1368]: INFO : Stage: mount Jan 23 23:55:20.872895 ignition[1368]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:55:20.872895 ignition[1368]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:55:20.872895 ignition[1368]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:55:20.881347 ignition[1368]: INFO : PUT result: OK Jan 23 23:55:20.887847 ignition[1368]: INFO : mount: mount passed Jan 23 23:55:20.890189 ignition[1368]: INFO : Ignition finished successfully Jan 23 23:55:20.896073 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 23 23:55:20.905137 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 23 23:55:21.020470 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 23 23:55:21.051013 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1379) Jan 23 23:55:21.055098 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7 Jan 23 23:55:21.055144 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jan 23 23:55:21.055173 kernel: BTRFS info (device nvme0n1p6): using free space tree Jan 23 23:55:21.063018 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jan 23 23:55:21.065096 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 23 23:55:21.084175 systemd-networkd[1206]: eth0: Gained IPv6LL Jan 23 23:55:21.108591 ignition[1396]: INFO : Ignition 2.19.0 Jan 23 23:55:21.108591 ignition[1396]: INFO : Stage: files Jan 23 23:55:21.112777 ignition[1396]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:55:21.112777 ignition[1396]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:55:21.112777 ignition[1396]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:55:21.121010 ignition[1396]: INFO : PUT result: OK Jan 23 23:55:21.127061 ignition[1396]: DEBUG : files: compiled without relabeling support, skipping Jan 23 23:55:21.131034 ignition[1396]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 23 23:55:21.131034 ignition[1396]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 23 23:55:21.183179 ignition[1396]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 23 23:55:21.186920 ignition[1396]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 23 23:55:21.190878 unknown[1396]: wrote ssh authorized keys file for user: core Jan 23 23:55:21.194259 ignition[1396]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 23 23:55:21.198793 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 23 23:55:21.203351 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 23 23:55:21.288781 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 23 23:55:21.483561 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 23 23:55:21.483561 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 23 23:55:21.494626 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 23 23:55:21.494626 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 23 23:55:21.494626 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 23 23:55:21.494626 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 23:55:21.494626 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 23 23:55:21.494626 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 23:55:21.494626 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 23 23:55:21.494626 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 23:55:21.494626 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 23 23:55:21.494626 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:55:21.494626 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:55:21.494626 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:55:21.494626 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 23 23:55:21.934685 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 23 23:55:22.374560 ignition[1396]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 23 23:55:22.374560 ignition[1396]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 23 23:55:22.382701 ignition[1396]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 23:55:22.382701 ignition[1396]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 23:55:22.382701 ignition[1396]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 23 23:55:22.382701 ignition[1396]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 23 23:55:22.382701 ignition[1396]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 23:55:22.382701 ignition[1396]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 23:55:22.382701 ignition[1396]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 23:55:22.382701 ignition[1396]: INFO : files: files passed Jan 23 23:55:22.382701 ignition[1396]: INFO : Ignition finished successfully Jan 23 23:55:22.414052 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 23:55:22.425306 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 23:55:22.438407 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 23:55:22.452407 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 23:55:22.452603 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 23:55:22.472875 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:55:22.477395 initrd-setup-root-after-ignition[1424]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:55:22.477395 initrd-setup-root-after-ignition[1424]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:55:22.491250 initrd-setup-root-after-ignition[1428]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:55:22.481171 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 23:55:22.504389 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... 
Jan 23 23:55:22.585872 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 23:55:22.586459 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 23:55:22.595429 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 23:55:22.597947 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 23:55:22.603110 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 23:55:22.612419 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 23:55:22.656481 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:55:22.673859 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 23:55:22.698874 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:55:22.702885 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:55:22.708246 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 23:55:22.708755 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 23:55:22.709008 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:55:22.709876 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 23:55:22.710650 systemd[1]: Stopped target basic.target - Basic System. Jan 23 23:55:22.711082 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 23:55:22.711410 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 23:55:22.711919 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 23:55:22.717473 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 23:55:22.717842 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:55:22.718604 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 23:55:22.719003 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 23:55:22.719347 systemd[1]: Stopped target swap.target - Swaps. Jan 23 23:55:22.719657 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 23:55:22.721521 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:55:22.724096 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:55:22.724482 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:55:22.724775 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 23:55:22.743336 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:55:22.743611 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 23:55:22.743890 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 23:55:22.749848 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 23:55:22.750118 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:55:22.754359 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 23:55:22.754588 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 23:55:22.806379 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
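
initrd-parse-etc.service, finished above, re-reads the real root's /etc/fstab from /sysroot and generates mount units for it ahead of the pivot. As an illustration only (not systemd's code), the field layout such a generator consumes:

    def parse_fstab(text):
        entries = []
        for line in text.splitlines():
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            # device, mountpoint, fstype, options[, dump[, pass]]
            fields = line.split()
            dev, mnt, fstype = fields[:3]
            opts = fields[3] if len(fields) > 3 else "defaults"
            entries.append((dev, mnt, fstype, opts.split(",")))
        return entries

    print(parse_fstab("LABEL=ROOT / ext4 rw 0 1"))
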
Jan 23 23:55:22.847033 ignition[1449]: INFO : Ignition 2.19.0 Jan 23 23:55:22.847033 ignition[1449]: INFO : Stage: umount Jan 23 23:55:22.846946 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 23:55:22.853898 ignition[1449]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:55:22.853898 ignition[1449]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:55:22.853898 ignition[1449]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:55:22.853898 ignition[1449]: INFO : PUT result: OK Jan 23 23:55:22.850499 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 23:55:22.852534 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:55:22.874911 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 23:55:22.875201 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:55:22.889955 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 23:55:22.893267 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 23:55:22.902455 ignition[1449]: INFO : umount: umount passed Jan 23 23:55:22.906043 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 23:55:22.909431 ignition[1449]: INFO : Ignition finished successfully Jan 23 23:55:22.914764 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 23:55:22.915320 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 23:55:22.925532 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 23:55:22.925917 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 23:55:22.935365 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 23:55:22.935572 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 23:55:22.938696 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 23:55:22.938828 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 23:55:22.943842 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 23:55:22.943960 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 23:55:22.948366 systemd[1]: Stopped target network.target - Network. Jan 23 23:55:22.952519 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 23:55:22.953234 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:55:22.957519 systemd[1]: Stopped target paths.target - Path Units. Jan 23 23:55:22.961933 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 23:55:22.969925 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:55:22.973345 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 23:55:22.975593 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 23:55:22.979068 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 23:55:22.979178 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:55:22.983624 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 23:55:22.983721 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:55:22.987473 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 23:55:22.987598 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 23:55:23.016633 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Jan 23 23:55:23.016777 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 23:55:23.019455 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 23:55:23.019568 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 23:55:23.023229 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 23:55:23.028281 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 23:55:23.042068 systemd-networkd[1206]: eth0: DHCPv6 lease lost Jan 23 23:55:23.052306 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 23:55:23.052834 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 23:55:23.065760 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 23:55:23.068436 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 23:55:23.075498 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 23:55:23.077922 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:55:23.089263 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 23:55:23.092532 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 23:55:23.092682 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:55:23.105573 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:55:23.105897 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:55:23.113420 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 23:55:23.113541 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 23:55:23.116382 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 23:55:23.116499 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:55:23.120547 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:55:23.155500 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 23:55:23.156258 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:55:23.168921 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 23:55:23.169201 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 23:55:23.172032 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 23:55:23.172104 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:55:23.174626 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 23:55:23.174930 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:55:23.191197 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 23:55:23.191303 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 23:55:23.195210 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:55:23.195314 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:55:23.201355 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 23:55:23.216084 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 23:55:23.219258 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Jan 23 23:55:23.225948 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 23:55:23.226120 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:55:23.229279 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 23:55:23.229393 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:55:23.233583 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:55:23.233701 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:55:23.250657 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 23:55:23.250867 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 23:55:23.285643 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 23:55:23.288096 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 23:55:23.295251 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 23:55:23.306311 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 23:55:23.329387 systemd[1]: Switching root. Jan 23 23:55:23.393197 systemd-journald[251]: Journal stopped Jan 23 23:55:25.923320 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Jan 23 23:55:25.923471 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 23:55:25.923520 kernel: SELinux: policy capability open_perms=1 Jan 23 23:55:25.923554 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 23:55:25.923584 kernel: SELinux: policy capability always_check_network=0 Jan 23 23:55:25.923615 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 23:55:25.923647 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 23:55:25.923685 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 23:55:25.923726 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 23:55:25.923757 kernel: audit: type=1403 audit(1769212523.943:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 23:55:25.923869 systemd[1]: Successfully loaded SELinux policy in 63.515ms. Jan 23 23:55:25.923916 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.471ms. Jan 23 23:55:25.923954 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 23 23:55:25.928168 systemd[1]: Detected virtualization amazon. Jan 23 23:55:25.928219 systemd[1]: Detected architecture arm64. Jan 23 23:55:25.928265 systemd[1]: Detected first boot. Jan 23 23:55:25.928301 systemd[1]: Initializing machine ID from VM UUID. Jan 23 23:55:25.928334 zram_generator::config[1491]: No configuration found. Jan 23 23:55:25.928374 systemd[1]: Populated /etc with preset unit settings. Jan 23 23:55:25.928409 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 23:55:25.928439 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 23:55:25.928471 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 23:55:25.928506 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
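
"Initializing machine ID from VM UUID" above refers to seeding /etc/machine-id on first boot from a hypervisor-provided identifier. A rough sketch under the assumption that the SMBIOS product UUID is the source; systemd's actual probing order is more involved and platform-dependent, and the sysfs file below usually requires root:

    from pathlib import Path

    # /etc/machine-id wants 32 lowercase hex characters, no dashes
    raw = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    print(raw.replace("-", "").lower())
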
Jan 23 23:55:25.928541 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 23:55:25.928581 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 23:55:25.928614 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 23:55:25.928648 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 23:55:25.928682 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 23:55:25.928713 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 23:55:25.928758 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 23:55:25.928789 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:55:25.928824 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:55:25.928863 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 23:55:25.928897 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 23:55:25.928934 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 23:55:25.928994 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 23:55:25.929037 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 23:55:25.929073 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:55:25.929111 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 23:55:25.929156 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 23:55:25.929191 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 23:55:25.929230 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 23:55:25.929263 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:55:25.929296 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 23:55:25.929329 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:55:25.929363 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:55:25.929393 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 23:55:25.929423 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 23:55:25.929455 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:55:25.929491 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:55:25.929526 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:55:25.929559 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 23:55:25.929592 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 23:55:25.929622 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 23:55:25.929656 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 23:55:25.929694 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 23:55:25.929728 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 23:55:25.929759 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
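
The \x2d sequences in the slice names above are systemd's unit-name escaping: '-' separates slice hierarchy levels, so a literal dash inside a component is hex-escaped. A simplified version of the rule (the real escaper also handles full paths and guarantees reversibility):

    def systemd_escape(component):
        out = []
        for i, ch in enumerate(component):
            # allowed verbatim: alphanumerics, ':', '_', and a non-leading '.'
            if ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
                out.append(ch)
            else:
                out.extend("\\x%02x" % b for b in ch.encode())
        return "".join(out)

    print(systemd_escape("serial-getty"))  # serial\x2dgetty
    print(systemd_escape("addon-config"))  # addon\x2dconfig
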
Jan 23 23:55:25.929797 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 23:55:25.929832 systemd[1]: Reached target machines.target - Containers. Jan 23 23:55:25.929862 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 23:55:25.929893 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:55:25.929924 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:55:25.929955 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 23:55:25.938143 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:55:25.938189 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:55:25.938232 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:55:25.938275 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 23:55:25.938310 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:55:25.938345 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 23:55:25.938376 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 23:55:25.938407 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 23:55:25.938446 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 23:55:25.938480 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 23:55:25.938514 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 23:55:25.938551 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 23:55:25.938585 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 23:55:25.938615 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 23:55:25.938650 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 23:55:25.938684 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 23:55:25.938718 systemd[1]: Stopped verity-setup.service. Jan 23 23:55:25.938749 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 23:55:25.938784 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 23:55:25.938817 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 23:55:25.938857 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 23:55:25.938889 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 23:55:25.938923 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 23:55:25.938956 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:55:25.943886 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 23:55:25.943944 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 23:55:25.944021 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:55:25.944058 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
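
The modprobe@<name>.service entries above are instances of a single template unit; each instance effectively runs modprobe on its instance name. The same effect by hand, sketched with subprocess (check=False so modules that are built in or already loaded are tolerated):

    import subprocess

    for mod in ["configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"]:
        subprocess.run(["modprobe", mod], check=False)
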
Jan 23 23:55:25.944090 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:55:25.944121 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:55:25.944151 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 23:55:25.944183 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 23:55:25.944212 kernel: fuse: init (API version 7.39) Jan 23 23:55:25.944250 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 23:55:25.944280 kernel: loop: module loaded Jan 23 23:55:25.944313 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 23:55:25.944344 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 23:55:25.944374 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 23:55:25.944463 systemd-journald[1576]: Collecting audit messages is disabled. Jan 23 23:55:25.944523 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 23 23:55:25.944576 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 23:55:25.944609 kernel: ACPI: bus type drm_connector registered Jan 23 23:55:25.944638 systemd-journald[1576]: Journal started Jan 23 23:55:25.944689 systemd-journald[1576]: Runtime Journal (/run/log/journal/ec2df1888b6a54750aa210583ffe2276) is 8.0M, max 75.3M, 67.3M free. Jan 23 23:55:25.213880 systemd[1]: Queued start job for default target multi-user.target. Jan 23 23:55:25.958022 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 23:55:25.958079 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:55:25.270589 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 23 23:55:25.271566 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 23:55:25.984031 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 23:55:25.984129 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:55:26.000867 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 23:55:26.027108 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 23:55:26.039606 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 23:55:26.053046 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 23:55:26.055084 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 23:55:26.058856 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:55:26.059229 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:55:26.063888 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 23:55:26.064250 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 23:55:26.067438 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:55:26.067732 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 23 23:55:26.072868 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:55:26.076120 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 23:55:26.080906 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 23:55:26.109116 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 23:55:26.136209 kernel: loop0: detected capacity change from 0 to 52536 Jan 23 23:55:26.143171 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 23:55:26.153356 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 23:55:26.181934 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 23:55:26.194312 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 23 23:55:26.197482 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:55:26.221294 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:55:26.230093 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 23:55:26.233533 systemd-tmpfiles[1601]: ACLs are not supported, ignoring. Jan 23 23:55:26.233559 systemd-tmpfiles[1601]: ACLs are not supported, ignoring. Jan 23 23:55:26.254895 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 23:55:26.258355 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:55:26.265610 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 23 23:55:26.268267 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 23:55:26.286781 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 23:55:26.307033 systemd-journald[1576]: Time spent on flushing to /var/log/journal/ec2df1888b6a54750aa210583ffe2276 is 38.708ms for 912 entries. Jan 23 23:55:26.307033 systemd-journald[1576]: System Journal (/var/log/journal/ec2df1888b6a54750aa210583ffe2276) is 8.0M, max 195.6M, 187.6M free. Jan 23 23:55:26.370575 systemd-journald[1576]: Received client request to flush runtime journal. Jan 23 23:55:26.370685 kernel: loop1: detected capacity change from 0 to 207008 Jan 23 23:55:26.316089 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:55:26.331594 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 23 23:55:26.379818 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 23:55:26.406376 udevadm[1636]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 23 23:55:26.436710 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:55:26.440112 kernel: loop2: detected capacity change from 0 to 114328 Jan 23 23:55:26.446726 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 23:55:26.460540 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 23:55:26.507920 systemd-tmpfiles[1645]: ACLs are not supported, ignoring. Jan 23 23:55:26.507962 systemd-tmpfiles[1645]: ACLs are not supported, ignoring. 
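
Back-of-envelope check on the journald flush message above (38.708 ms for 912 entries):

    ms, entries = 38.708, 912
    print(round(ms / entries * 1000, 1), "us per entry")  # ~42.4
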
Jan 23 23:55:26.519085 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:55:26.602018 kernel: loop3: detected capacity change from 0 to 114432 Jan 23 23:55:26.729146 kernel: loop4: detected capacity change from 0 to 52536 Jan 23 23:55:26.748049 kernel: loop5: detected capacity change from 0 to 207008 Jan 23 23:55:26.780039 kernel: loop6: detected capacity change from 0 to 114328 Jan 23 23:55:26.798041 kernel: loop7: detected capacity change from 0 to 114432 Jan 23 23:55:26.810658 (sd-merge)[1650]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 23 23:55:26.815364 (sd-merge)[1650]: Merged extensions into '/usr'. Jan 23 23:55:26.823537 systemd[1]: Reloading requested from client PID 1600 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 23:55:26.823580 systemd[1]: Reloading... Jan 23 23:55:26.989358 zram_generator::config[1673]: No configuration found. Jan 23 23:55:27.368739 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:55:27.530648 systemd[1]: Reloading finished in 706 ms. Jan 23 23:55:27.574343 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 23:55:27.578733 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 23:55:27.596546 systemd[1]: Starting ensure-sysext.service... Jan 23 23:55:27.604322 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 23:55:27.614351 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:55:27.639174 systemd[1]: Reloading requested from client PID 1728 ('systemctl') (unit ensure-sysext.service)... Jan 23 23:55:27.639212 systemd[1]: Reloading... Jan 23 23:55:27.680865 systemd-tmpfiles[1729]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 23:55:27.682682 systemd-tmpfiles[1729]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 23:55:27.688508 systemd-tmpfiles[1729]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 23:55:27.689166 systemd-tmpfiles[1729]: ACLs are not supported, ignoring. Jan 23 23:55:27.689322 systemd-tmpfiles[1729]: ACLs are not supported, ignoring. Jan 23 23:55:27.705872 systemd-tmpfiles[1729]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:55:27.705906 systemd-tmpfiles[1729]: Skipping /boot Jan 23 23:55:27.756578 systemd-udevd[1730]: Using default interface naming scheme 'v255'. Jan 23 23:55:27.763671 systemd-tmpfiles[1729]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:55:27.763706 systemd-tmpfiles[1729]: Skipping /boot Jan 23 23:55:27.816202 ldconfig[1597]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 23:55:27.834015 zram_generator::config[1754]: No configuration found. Jan 23 23:55:28.116233 (udev-worker)[1760]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:55:28.346172 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
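
The (sd-merge) lines above are systemd-sysext stacking each extension image's /usr tree over the base /usr as a read-only overlay. Conceptually the mount has the shape sketched below; the staging path is hypothetical and the real tool also validates extension-release metadata, so treat this as an illustration, not the implementation:

    import subprocess

    lowerdirs = [
        "/run/extensions/kubernetes/usr",  # hypothetical staged extension
        "/usr",                            # base layer goes last (lowest)
    ]
    # overlayfs: leftmost lowerdir is topmost; no upperdir => read-only merge
    subprocess.run(
        ["mount", "-t", "overlay", "overlay",
         "-o", "lowerdir=" + ":".join(lowerdirs), "/usr"],
        check=True,
    )
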
Jan 23 23:55:28.511021 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1779) Jan 23 23:55:28.546939 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 23:55:28.548517 systemd[1]: Reloading finished in 908 ms. Jan 23 23:55:28.581093 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:55:28.585564 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 23:55:28.589707 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:55:28.667443 systemd[1]: Finished ensure-sysext.service. Jan 23 23:55:28.721758 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 23 23:55:28.736791 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 23:55:28.749313 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:55:28.762304 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 23:55:28.767461 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:55:28.776804 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 23 23:55:28.790243 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:55:28.806422 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:55:28.812318 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:55:28.820339 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:55:28.823312 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:55:28.826341 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 23:55:28.834361 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 23:55:28.855531 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 23:55:28.876130 lvm[1929]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:55:28.884448 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 23:55:28.887081 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 23:55:28.897761 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 23:55:28.910374 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:55:28.926342 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 23:55:28.932281 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:55:28.933641 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:55:28.944907 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:55:28.947080 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:55:28.976826 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:55:28.977328 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 23 23:55:28.981240 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:55:29.004580 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 23:55:29.018794 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 23:55:29.052928 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:55:29.054305 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:55:29.057550 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:55:29.075287 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 23 23:55:29.078796 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:55:29.095487 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 23 23:55:29.112904 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 23:55:29.125354 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 23:55:29.130690 augenrules[1966]: No rules Jan 23 23:55:29.138685 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:55:29.154622 lvm[1963]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:55:29.163189 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 23:55:29.165000 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 23:55:29.202030 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 23:55:29.230111 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 23:55:29.267199 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 23 23:55:29.331176 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:55:29.374766 systemd-networkd[1942]: lo: Link UP Jan 23 23:55:29.374789 systemd-networkd[1942]: lo: Gained carrier Jan 23 23:55:29.378292 systemd-networkd[1942]: Enumeration completed Jan 23 23:55:29.378558 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:55:29.382284 systemd-networkd[1942]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:55:29.382291 systemd-networkd[1942]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:55:29.389760 systemd-networkd[1942]: eth0: Link UP Jan 23 23:55:29.390123 systemd-resolved[1943]: Positive Trust Anchors: Jan 23 23:55:29.390467 systemd-resolved[1943]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 23:55:29.390580 systemd-resolved[1943]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 23:55:29.392283 systemd-networkd[1942]: eth0: Gained carrier Jan 23 23:55:29.392320 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 23:55:29.392323 systemd-networkd[1942]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:55:29.414115 systemd-networkd[1942]: eth0: DHCPv4 address 172.31.20.253/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 23:55:29.421955 systemd-resolved[1943]: Defaulting to hostname 'linux'. Jan 23 23:55:29.425645 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 23:55:29.428532 systemd[1]: Reached target network.target - Network. Jan 23 23:55:29.430759 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:55:29.433604 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 23:55:29.436454 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 23:55:29.439491 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 23:55:29.443038 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 23:55:29.446098 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 23:55:29.449546 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 23:55:29.452508 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 23:55:29.452572 systemd[1]: Reached target paths.target - Path Units. Jan 23 23:55:29.454742 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:55:29.459205 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 23:55:29.465199 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 23:55:29.479175 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 23:55:29.483006 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 23:55:29.486109 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:55:29.488729 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:55:29.491386 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:55:29.491473 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:55:29.498263 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 23:55:29.513332 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
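
The positive trust anchor systemd-resolved logs above is the DNSSEC root DS record. Splitting it into its named fields:

    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")
    owner, cls, rrtype, key_tag, algorithm, digest_type, digest = ds.split()
    # key tag 20326, algorithm 8 = RSA/SHA-256, digest type 2 = SHA-256
    print(key_tag, algorithm, digest_type)
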
Jan 23 23:55:29.520502 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 23:55:29.528260 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 23:55:29.535437 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 23:55:29.538202 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 23:55:29.556539 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 23:55:29.569538 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 23:55:29.578250 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 23:55:29.583088 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 23 23:55:29.596379 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 23:55:29.610306 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 23:55:29.626280 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 23:55:29.629532 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 23:55:29.632496 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 23:55:29.637405 jq[1993]: false Jan 23 23:55:29.638204 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 23:55:29.649270 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 23:55:29.657803 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 23:55:29.658891 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 23:55:29.669133 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 23:55:29.672129 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 23:55:29.721109 extend-filesystems[1994]: Found loop4 Jan 23 23:55:29.721109 extend-filesystems[1994]: Found loop5 Jan 23 23:55:29.721109 extend-filesystems[1994]: Found loop6 Jan 23 23:55:29.721109 extend-filesystems[1994]: Found loop7 Jan 23 23:55:29.721109 extend-filesystems[1994]: Found nvme0n1 Jan 23 23:55:29.721109 extend-filesystems[1994]: Found nvme0n1p1 Jan 23 23:55:29.721109 extend-filesystems[1994]: Found nvme0n1p2 Jan 23 23:55:29.721109 extend-filesystems[1994]: Found nvme0n1p3 Jan 23 23:55:29.721109 extend-filesystems[1994]: Found usr Jan 23 23:55:29.721109 extend-filesystems[1994]: Found nvme0n1p4 Jan 23 23:55:29.721109 extend-filesystems[1994]: Found nvme0n1p6 Jan 23 23:55:29.721109 extend-filesystems[1994]: Found nvme0n1p7 Jan 23 23:55:29.721109 extend-filesystems[1994]: Found nvme0n1p9 Jan 23 23:55:29.721109 extend-filesystems[1994]: Checking size of /dev/nvme0n1p9 Jan 23 23:55:29.777625 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
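
The "Found ..." inventory above comes from Flatcar's extend-filesystems helper walking the visible block devices before deciding what to grow. A rough stand-in via sysfs (not the helper's actual code):

    from pathlib import Path

    for dev in sorted(p.name for p in Path("/sys/class/block").iterdir()):
        print("Found", dev)
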
Jan 23 23:55:29.777216 dbus-daemon[1992]: [system] SELinux support is enabled Jan 23 23:55:29.787237 dbus-daemon[1992]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1942 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 23:55:29.807484 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 23:55:29.807548 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 23:55:29.811250 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 23:55:29.811299 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 23:55:29.824430 dbus-daemon[1992]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 23:55:29.833180 extend-filesystems[1994]: Resized partition /dev/nvme0n1p9 Jan 23 23:55:29.866719 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 23 23:55:29.867721 extend-filesystems[2025]: resize2fs 1.47.1 (20-May-2024) Jan 23 23:55:29.870343 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 23:55:29.873205 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 23:55:29.896462 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 23 23:55:29.896560 jq[2006]: true Jan 23 23:55:29.970760 ntpd[1996]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 21:53:23 UTC 2026 (1): Starting Jan 23 23:55:29.970840 ntpd[1996]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 23:55:29.971460 ntpd[1996]: 23 Jan 23:55:29 ntpd[1996]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 21:53:23 UTC 2026 (1): Starting Jan 23 23:55:29.971460 ntpd[1996]: 23 Jan 23:55:29 ntpd[1996]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 23:55:29.971460 ntpd[1996]: 23 Jan 23:55:29 ntpd[1996]: ---------------------------------------------------- Jan 23 23:55:29.971460 ntpd[1996]: 23 Jan 23:55:29 ntpd[1996]: ntp-4 is maintained by Network Time Foundation, Jan 23 23:55:29.971460 ntpd[1996]: 23 Jan 23:55:29 ntpd[1996]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 23:55:29.971460 ntpd[1996]: 23 Jan 23:55:29 ntpd[1996]: corporation. Support and training for ntp-4 are Jan 23 23:55:29.971460 ntpd[1996]: 23 Jan 23:55:29 ntpd[1996]: available at https://www.nwtime.org/support Jan 23 23:55:29.971460 ntpd[1996]: 23 Jan 23:55:29 ntpd[1996]: ---------------------------------------------------- Jan 23 23:55:29.970865 ntpd[1996]: ---------------------------------------------------- Jan 23 23:55:29.970886 ntpd[1996]: ntp-4 is maintained by Network Time Foundation, Jan 23 23:55:29.970907 ntpd[1996]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 23:55:29.970932 ntpd[1996]: corporation. 
Support and training for ntp-4 are Jan 23 23:55:29.970953 ntpd[1996]: available at https://www.nwtime.org/support Jan 23 23:55:29.971006 ntpd[1996]: ---------------------------------------------------- Jan 23 23:55:29.993155 ntpd[1996]: proto: precision = 0.108 usec (-23) Jan 23 23:55:30.002164 ntpd[1996]: 23 Jan 23:55:29 ntpd[1996]: proto: precision = 0.108 usec (-23) Jan 23 23:55:30.002164 ntpd[1996]: 23 Jan 23:55:29 ntpd[1996]: basedate set to 2026-01-11 Jan 23 23:55:30.002164 ntpd[1996]: 23 Jan 23:55:29 ntpd[1996]: gps base set to 2026-01-11 (week 2401) Jan 23 23:55:29.996087 ntpd[1996]: basedate set to 2026-01-11 Jan 23 23:55:29.996142 ntpd[1996]: gps base set to 2026-01-11 (week 2401) Jan 23 23:55:30.006750 (ntainerd)[2029]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 23:55:30.022335 ntpd[1996]: 23 Jan 23:55:30 ntpd[1996]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 23:55:30.022335 ntpd[1996]: 23 Jan 23:55:30 ntpd[1996]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 23:55:30.022455 tar[2028]: linux-arm64/LICENSE Jan 23 23:55:30.018005 ntpd[1996]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 23:55:30.018099 ntpd[1996]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 23:55:30.025745 ntpd[1996]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 23:55:30.034161 tar[2028]: linux-arm64/helm Jan 23 23:55:30.034227 ntpd[1996]: 23 Jan 23:55:30 ntpd[1996]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 23:55:30.034227 ntpd[1996]: 23 Jan 23:55:30 ntpd[1996]: Listen normally on 3 eth0 172.31.20.253:123 Jan 23 23:55:30.034227 ntpd[1996]: 23 Jan 23:55:30 ntpd[1996]: Listen normally on 4 lo [::1]:123 Jan 23 23:55:30.034227 ntpd[1996]: 23 Jan 23:55:30 ntpd[1996]: bind(21) AF_INET6 fe80::44e:a5ff:feb7:2489%2#123 flags 0x11 failed: Cannot assign requested address Jan 23 23:55:30.034227 ntpd[1996]: 23 Jan 23:55:30 ntpd[1996]: unable to create socket on eth0 (5) for fe80::44e:a5ff:feb7:2489%2#123 Jan 23 23:55:30.034227 ntpd[1996]: 23 Jan 23:55:30 ntpd[1996]: failed to init interface for address fe80::44e:a5ff:feb7:2489%2 Jan 23 23:55:30.034227 ntpd[1996]: 23 Jan 23:55:30 ntpd[1996]: Listening on routing socket on fd #21 for interface updates Jan 23 23:55:30.025836 ntpd[1996]: Listen normally on 3 eth0 172.31.20.253:123 Jan 23 23:55:30.025912 ntpd[1996]: Listen normally on 4 lo [::1]:123 Jan 23 23:55:30.026037 ntpd[1996]: bind(21) AF_INET6 fe80::44e:a5ff:feb7:2489%2#123 flags 0x11 failed: Cannot assign requested address Jan 23 23:55:30.026080 ntpd[1996]: unable to create socket on eth0 (5) for fe80::44e:a5ff:feb7:2489%2#123 Jan 23 23:55:30.026110 ntpd[1996]: failed to init interface for address fe80::44e:a5ff:feb7:2489%2 Jan 23 23:55:30.026178 ntpd[1996]: Listening on routing socket on fd #21 for interface updates Jan 23 23:55:30.056373 systemd-logind[2003]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 23:55:30.056430 systemd-logind[2003]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 23 23:55:30.057904 systemd-logind[2003]: New seat seat0. Jan 23 23:55:30.065074 ntpd[1996]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:55:30.066872 ntpd[1996]: 23 Jan 23:55:30 ntpd[1996]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:55:30.067890 systemd[1]: Started systemd-logind.service - User Login Management. 
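
The "(-23)" in ntpd's precision line above is the log2(seconds) rounding of the measured clock-reading precision; converting it back:

    precision_exp = -23
    print(round(2 ** precision_exp * 1e6, 3), "usec")  # ~0.119, vs 0.108 measured
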
Jan 23 23:55:30.077855 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 23 23:55:30.103082 jq[2034]: true Jan 23 23:55:30.080065 ntpd[1996]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:55:30.103437 ntpd[1996]: 23 Jan 23:55:30 ntpd[1996]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:55:30.106387 extend-filesystems[2025]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 23 23:55:30.106387 extend-filesystems[2025]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 23 23:55:30.106387 extend-filesystems[2025]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 23 23:55:30.118537 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 23:55:30.129111 extend-filesystems[1994]: Resized filesystem in /dev/nvme0n1p9 Jan 23 23:55:30.120659 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 23:55:30.156036 update_engine[2004]: I20260123 23:55:30.135910 2004 main.cc:92] Flatcar Update Engine starting Jan 23 23:55:30.143532 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 23 23:55:30.178601 systemd[1]: Started update-engine.service - Update Engine. Jan 23 23:55:30.193228 update_engine[2004]: I20260123 23:55:30.184301 2004 update_check_scheduler.cc:74] Next update check in 5m2s Jan 23 23:55:30.197682 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 23:55:30.290469 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1767) Jan 23 23:55:30.388462 coreos-metadata[1991]: Jan 23 23:55:30.380 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 23:55:30.388462 coreos-metadata[1991]: Jan 23 23:55:30.386 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 23 23:55:30.397734 coreos-metadata[1991]: Jan 23 23:55:30.392 INFO Fetch successful Jan 23 23:55:30.397734 coreos-metadata[1991]: Jan 23 23:55:30.392 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 23 23:55:30.398001 bash[2085]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:55:30.399036 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
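
Converting the resize messages above out of 4k blocks:

    BLOCK = 4096
    old, new = 553472, 3587067
    print(round(old * BLOCK / 2**30, 2), "GiB ->",
          round(new * BLOCK / 2**30, 2), "GiB")  # 2.11 GiB -> 13.68 GiB
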
Jan 23 23:55:30.403508 coreos-metadata[1991]: Jan 23 23:55:30.399 INFO Fetch successful Jan 23 23:55:30.403508 coreos-metadata[1991]: Jan 23 23:55:30.399 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 23 23:55:30.408633 coreos-metadata[1991]: Jan 23 23:55:30.407 INFO Fetch successful Jan 23 23:55:30.408633 coreos-metadata[1991]: Jan 23 23:55:30.407 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 23 23:55:30.418398 coreos-metadata[1991]: Jan 23 23:55:30.418 INFO Fetch successful Jan 23 23:55:30.418398 coreos-metadata[1991]: Jan 23 23:55:30.418 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 23 23:55:30.430338 coreos-metadata[1991]: Jan 23 23:55:30.430 INFO Fetch failed with 404: resource not found Jan 23 23:55:30.430338 coreos-metadata[1991]: Jan 23 23:55:30.430 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 23 23:55:30.436593 coreos-metadata[1991]: Jan 23 23:55:30.434 INFO Fetch successful Jan 23 23:55:30.436593 coreos-metadata[1991]: Jan 23 23:55:30.434 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 23 23:55:30.440189 coreos-metadata[1991]: Jan 23 23:55:30.438 INFO Fetch successful Jan 23 23:55:30.440189 coreos-metadata[1991]: Jan 23 23:55:30.438 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 23 23:55:30.441462 coreos-metadata[1991]: Jan 23 23:55:30.441 INFO Fetch successful Jan 23 23:55:30.441462 coreos-metadata[1991]: Jan 23 23:55:30.441 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 23 23:55:30.448052 coreos-metadata[1991]: Jan 23 23:55:30.447 INFO Fetch successful Jan 23 23:55:30.448052 coreos-metadata[1991]: Jan 23 23:55:30.447 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 23 23:55:30.451030 coreos-metadata[1991]: Jan 23 23:55:30.449 INFO Fetch successful Jan 23 23:55:30.486117 systemd[1]: Starting sshkeys.service... Jan 23 23:55:30.650064 containerd[2029]: time="2026-01-23T23:55:30.638021952Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 23 23:55:30.639167 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 23:55:30.651728 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 23:55:30.749512 systemd-networkd[1942]: eth0: Gained IPv6LL Jan 23 23:55:30.757900 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 23:55:30.757640 dbus-daemon[1992]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 23:55:30.766848 dbus-daemon[1992]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2027 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 23:55:30.800462 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 23:55:30.818062 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 23:55:30.836863 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 23:55:30.864013 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 23 23:55:30.877188 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
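
Note the single 404 above: coreos-metadata treats it as "attribute not present" (this instance simply has no IPv6 address) rather than a failure, and carries on. A minimal fetch helper with the same tolerance, reusing the token scheme sketched earlier; illustration only:

    import urllib.request, urllib.error

    def fetch_optional(url, token):
        req = urllib.request.Request(
            url, headers={"X-aws-ec2-metadata-token": token})
        try:
            with urllib.request.urlopen(req, timeout=2) as resp:
                return resp.read().decode()
        except urllib.error.HTTPError as e:
            if e.code == 404:
                return None  # attribute not provisioned for this instance
            raise
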
Jan 23 23:55:30.902544 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 23:55:30.905287 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 23:55:30.911701 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 23:55:30.935613 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 23:55:30.949349 locksmithd[2055]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 23:55:31.039008 amazon-ssm-agent[2159]: Initializing new seelog logger Jan 23 23:55:31.039008 amazon-ssm-agent[2159]: New Seelog Logger Creation Complete Jan 23 23:55:31.039008 amazon-ssm-agent[2159]: 2026/01/23 23:55:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:55:31.039008 amazon-ssm-agent[2159]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:55:31.039008 amazon-ssm-agent[2159]: 2026/01/23 23:55:31 processing appconfig overrides Jan 23 23:55:31.045210 amazon-ssm-agent[2159]: 2026/01/23 23:55:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:55:31.045210 amazon-ssm-agent[2159]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:55:31.045210 amazon-ssm-agent[2159]: 2026/01/23 23:55:31 processing appconfig overrides Jan 23 23:55:31.045210 amazon-ssm-agent[2159]: 2026/01/23 23:55:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:55:31.045210 amazon-ssm-agent[2159]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:55:31.045210 amazon-ssm-agent[2159]: 2026/01/23 23:55:31 processing appconfig overrides Jan 23 23:55:31.048151 amazon-ssm-agent[2159]: 2026-01-23 23:55:31 INFO Proxy environment variables: Jan 23 23:55:31.053295 amazon-ssm-agent[2159]: 2026/01/23 23:55:31 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:55:31.053516 amazon-ssm-agent[2159]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:55:31.054491 amazon-ssm-agent[2159]: 2026/01/23 23:55:31 processing appconfig overrides Jan 23 23:55:31.058083 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 23:55:31.087203 containerd[2029]: time="2026-01-23T23:55:31.086504470Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:55:31.106409 containerd[2029]: time="2026-01-23T23:55:31.106327186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:55:31.115012 containerd[2029]: time="2026-01-23T23:55:31.109733794Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 23 23:55:31.115012 containerd[2029]: time="2026-01-23T23:55:31.109815262Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 23 23:55:31.115012 containerd[2029]: time="2026-01-23T23:55:31.110255338Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 23 23:55:31.115012 containerd[2029]: time="2026-01-23T23:55:31.110312842Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 23 23:55:31.115012 containerd[2029]: time="2026-01-23T23:55:31.110465950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:55:31.115012 containerd[2029]: time="2026-01-23T23:55:31.110503762Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:55:31.115012 containerd[2029]: time="2026-01-23T23:55:31.110866294Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:55:31.115012 containerd[2029]: time="2026-01-23T23:55:31.110910154Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 23 23:55:31.115012 containerd[2029]: time="2026-01-23T23:55:31.110941750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:55:31.124085 containerd[2029]: time="2026-01-23T23:55:31.122207830Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 23 23:55:31.124085 containerd[2029]: time="2026-01-23T23:55:31.122486962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:55:31.124085 containerd[2029]: time="2026-01-23T23:55:31.123051778Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:55:31.124085 containerd[2029]: time="2026-01-23T23:55:31.123313474Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:55:31.124085 containerd[2029]: time="2026-01-23T23:55:31.123355126Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 23 23:55:31.124085 containerd[2029]: time="2026-01-23T23:55:31.123591274Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 23 23:55:31.124085 containerd[2029]: time="2026-01-23T23:55:31.123705862Z" level=info msg="metadata content store policy set" policy=shared Jan 23 23:55:31.126546 polkitd[2168]: Started polkitd version 121 Jan 23 23:55:31.151043 amazon-ssm-agent[2159]: 2026-01-23 23:55:31 INFO https_proxy: Jan 23 23:55:31.155165 containerd[2029]: time="2026-01-23T23:55:31.154445842Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 23 23:55:31.155165 containerd[2029]: time="2026-01-23T23:55:31.154568242Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 23 23:55:31.155165 containerd[2029]: time="2026-01-23T23:55:31.154612282Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 23 23:55:31.155165 containerd[2029]: time="2026-01-23T23:55:31.154649218Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Jan 23 23:55:31.155165 containerd[2029]: time="2026-01-23T23:55:31.154690018Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 23 23:55:31.162359 containerd[2029]: time="2026-01-23T23:55:31.158119606Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 23 23:55:31.164518 containerd[2029]: time="2026-01-23T23:55:31.164379910Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 23 23:55:31.173960 containerd[2029]: time="2026-01-23T23:55:31.168158266Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 23 23:55:31.173960 containerd[2029]: time="2026-01-23T23:55:31.172950946Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 23 23:55:31.173960 containerd[2029]: time="2026-01-23T23:55:31.173053570Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 23 23:55:31.173960 containerd[2029]: time="2026-01-23T23:55:31.173118166Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 23 23:55:31.173960 containerd[2029]: time="2026-01-23T23:55:31.173176870Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 23 23:55:31.173960 containerd[2029]: time="2026-01-23T23:55:31.173213194Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 23 23:55:31.173960 containerd[2029]: time="2026-01-23T23:55:31.173274118Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 23 23:55:31.173960 containerd[2029]: time="2026-01-23T23:55:31.173312026Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 23 23:55:31.173960 containerd[2029]: time="2026-01-23T23:55:31.173368342Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 23 23:55:31.173960 containerd[2029]: time="2026-01-23T23:55:31.173406934Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 23 23:55:31.173960 containerd[2029]: time="2026-01-23T23:55:31.173462242Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 23 23:55:31.173960 containerd[2029]: time="2026-01-23T23:55:31.173530318Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 23 23:55:31.173960 containerd[2029]: time="2026-01-23T23:55:31.173567290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 23 23:55:31.173960 containerd[2029]: time="2026-01-23T23:55:31.173630278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 23 23:55:31.174693 containerd[2029]: time="2026-01-23T23:55:31.173711338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 23 23:55:31.174693 containerd[2029]: time="2026-01-23T23:55:31.173758378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jan 23 23:55:31.174693 containerd[2029]: time="2026-01-23T23:55:31.173822170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 23 23:55:31.179496 containerd[2029]: time="2026-01-23T23:55:31.173855422Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 23 23:55:31.179496 containerd[2029]: time="2026-01-23T23:55:31.176363578Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 23 23:55:31.179496 containerd[2029]: time="2026-01-23T23:55:31.176696158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 23 23:55:31.179942 containerd[2029]: time="2026-01-23T23:55:31.177588706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 23 23:55:31.180802 coreos-metadata[2129]: Jan 23 23:55:31.180 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 23:55:31.181371 containerd[2029]: time="2026-01-23T23:55:31.180140722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 23 23:55:31.181371 containerd[2029]: time="2026-01-23T23:55:31.180309574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 23 23:55:31.181371 containerd[2029]: time="2026-01-23T23:55:31.180410338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 23 23:55:31.181371 containerd[2029]: time="2026-01-23T23:55:31.180585466Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 23 23:55:31.187523 containerd[2029]: time="2026-01-23T23:55:31.180664186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 23 23:55:31.187523 containerd[2029]: time="2026-01-23T23:55:31.181871098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 23 23:55:31.187523 containerd[2029]: time="2026-01-23T23:55:31.182221738Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 23 23:55:31.187523 containerd[2029]: time="2026-01-23T23:55:31.186914590Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 23 23:55:31.187523 containerd[2029]: time="2026-01-23T23:55:31.187023310Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 23 23:55:31.187523 containerd[2029]: time="2026-01-23T23:55:31.187058638Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 23 23:55:31.187523 containerd[2029]: time="2026-01-23T23:55:31.187089934Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 23 23:55:31.187523 containerd[2029]: time="2026-01-23T23:55:31.187116634Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 23 23:55:31.187523 containerd[2029]: time="2026-01-23T23:55:31.187155958Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 23 23:55:31.187523 containerd[2029]: time="2026-01-23T23:55:31.187184146Z" level=info msg="NRI interface is disabled by configuration." Jan 23 23:55:31.187523 containerd[2029]: time="2026-01-23T23:55:31.187230982Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 23 23:55:31.191645 coreos-metadata[2129]: Jan 23 23:55:31.188 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 23 23:55:31.191645 coreos-metadata[2129]: Jan 23 23:55:31.191 INFO Fetch successful Jan 23 23:55:31.191645 coreos-metadata[2129]: Jan 23 23:55:31.191 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 23:55:31.191146 polkitd[2168]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 23:55:31.191290 polkitd[2168]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 23:55:31.198412 coreos-metadata[2129]: Jan 23 23:55:31.197 INFO Fetch successful Jan 23 23:55:31.198557 containerd[2029]: time="2026-01-23T23:55:31.195122782Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri 
StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 23 23:55:31.198557 containerd[2029]: time="2026-01-23T23:55:31.195293194Z" level=info msg="Connect containerd service" Jan 23 23:55:31.198557 containerd[2029]: time="2026-01-23T23:55:31.195365278Z" level=info msg="using legacy CRI server" Jan 23 23:55:31.198557 containerd[2029]: time="2026-01-23T23:55:31.195384658Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 23:55:31.198557 containerd[2029]: time="2026-01-23T23:55:31.195551278Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 23 23:55:31.207117 unknown[2129]: wrote ssh authorized keys file for user: core Jan 23 23:55:31.213435 containerd[2029]: time="2026-01-23T23:55:31.212057711Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 23:55:31.213435 containerd[2029]: time="2026-01-23T23:55:31.212420795Z" level=info msg="Start subscribing containerd event" Jan 23 23:55:31.213435 containerd[2029]: time="2026-01-23T23:55:31.212502611Z" level=info msg="Start recovering state" Jan 23 23:55:31.213435 containerd[2029]: time="2026-01-23T23:55:31.212627159Z" level=info msg="Start event monitor" Jan 23 23:55:31.213435 containerd[2029]: time="2026-01-23T23:55:31.212651771Z" level=info msg="Start snapshots syncer" Jan 23 23:55:31.213435 containerd[2029]: time="2026-01-23T23:55:31.212672783Z" level=info msg="Start cni network conf syncer for default" Jan 23 23:55:31.213435 containerd[2029]: time="2026-01-23T23:55:31.212692223Z" level=info msg="Start streaming server" Jan 23 23:55:31.221168 polkitd[2168]: Finished loading, compiling and executing 2 rules Jan 23 23:55:31.223700 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 23:55:31.228271 containerd[2029]: time="2026-01-23T23:55:31.223252703Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 23:55:31.228271 containerd[2029]: time="2026-01-23T23:55:31.223425107Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 23:55:31.228271 containerd[2029]: time="2026-01-23T23:55:31.225349079Z" level=info msg="containerd successfully booted in 0.592030s" Jan 23 23:55:31.232831 dbus-daemon[1992]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 23:55:31.233178 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 23:55:31.240577 polkitd[2168]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 23:55:31.260011 amazon-ssm-agent[2159]: 2026-01-23 23:55:31 INFO http_proxy: Jan 23 23:55:31.300219 systemd-hostnamed[2027]: Hostname set to <ip-172-31-20-253> (transient) Jan 23 23:55:31.316030 update-ssh-keys[2204]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:55:31.308324 systemd-resolved[1943]: System hostname changed to 'ip-172-31-20-253'. Jan 23 23:55:31.308566 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 23:55:31.326146 systemd[1]: Finished sshkeys.service. 
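The pair "wrote ssh authorized keys file for user: core" / "Updated /home/core/.ssh/authorized_keys" is the metadata agent handing the key fetched from meta-data/public-keys off to update-ssh-keys. Conceptually the step reduces to writing the fetched keys under the user's ~/.ssh with permissions sshd will accept; a hedged sketch (paths from the log, logic illustrative, not the agent's real implementation):

    # Sketch: what the sshkeys step amounts to - persist fetched public keys
    # for user "core". Illustrative only; the real agent goes via a helper.
    import pathlib

    def write_authorized_keys(home: str, keys: list[str]) -> None:
        ssh_dir = pathlib.Path(home, ".ssh")
        ssh_dir.mkdir(mode=0o700, exist_ok=True)
        path = ssh_dir / "authorized_keys"
        path.write_text("".join(k.rstrip() + "\n" for k in keys))
        path.chmod(0o600)  # sshd refuses group/world-accessible key files

    # write_authorized_keys("/home/core", [key_fetched_from_imds])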
Jan 23 23:55:31.359589 amazon-ssm-agent[2159]: 2026-01-23 23:55:31 INFO no_proxy: Jan 23 23:55:31.459703 amazon-ssm-agent[2159]: 2026-01-23 23:55:31 INFO Checking if agent identity type OnPrem can be assumed Jan 23 23:55:31.562086 amazon-ssm-agent[2159]: 2026-01-23 23:55:31 INFO Checking if agent identity type EC2 can be assumed Jan 23 23:55:31.661530 amazon-ssm-agent[2159]: 2026-01-23 23:55:31 INFO Agent will take identity from EC2 Jan 23 23:55:31.763621 amazon-ssm-agent[2159]: 2026-01-23 23:55:31 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:55:31.864138 amazon-ssm-agent[2159]: 2026-01-23 23:55:31 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:55:31.966018 amazon-ssm-agent[2159]: 2026-01-23 23:55:31 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:55:32.064444 amazon-ssm-agent[2159]: 2026-01-23 23:55:31 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 23 23:55:32.166653 amazon-ssm-agent[2159]: 2026-01-23 23:55:31 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 23 23:55:32.236956 amazon-ssm-agent[2159]: 2026-01-23 23:55:31 INFO [amazon-ssm-agent] Starting Core Agent Jan 23 23:55:32.236956 amazon-ssm-agent[2159]: 2026-01-23 23:55:31 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 23 23:55:32.237185 amazon-ssm-agent[2159]: 2026-01-23 23:55:31 INFO [Registrar] Starting registrar module Jan 23 23:55:32.237185 amazon-ssm-agent[2159]: 2026-01-23 23:55:31 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 23 23:55:32.237185 amazon-ssm-agent[2159]: 2026-01-23 23:55:32 INFO [EC2Identity] EC2 registration was successful. Jan 23 23:55:32.237185 amazon-ssm-agent[2159]: 2026-01-23 23:55:32 INFO [CredentialRefresher] credentialRefresher has started Jan 23 23:55:32.237185 amazon-ssm-agent[2159]: 2026-01-23 23:55:32 INFO [CredentialRefresher] Starting credentials refresher loop Jan 23 23:55:32.237185 amazon-ssm-agent[2159]: 2026-01-23 23:55:32 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 23 23:55:32.268219 amazon-ssm-agent[2159]: 2026-01-23 23:55:32 INFO [CredentialRefresher] Next credential rotation will be in 31.516657159366666 minutes Jan 23 23:55:32.408046 tar[2028]: linux-arm64/README.md Jan 23 23:55:32.428060 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 23:55:32.643205 sshd_keygen[2030]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 23:55:32.685833 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 23:55:32.697561 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 23:55:32.705019 systemd[1]: Started sshd@0-172.31.20.253:22-4.153.228.146:58962.service - OpenSSH per-connection server daemon (4.153.228.146:58962). Jan 23 23:55:32.736774 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 23:55:32.739571 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 23:55:32.754276 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 23:55:32.794604 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 23:55:32.810522 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 23:55:32.822766 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 23:55:32.825823 systemd[1]: Reached target getty.target - Login Prompts. 
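The "ssh-keygen: generating new host keys: RSA ECDSA ED25519" entry is first-boot host key generation; "ssh-keygen -A", which creates any missing host keys of the default types, produces exactly this set under /etc/ssh. An illustrative invocation (the unit runs the tool directly; the Python wrapper is only for consistency with the other sketches here):

    # Sketch: regenerate missing SSH host keys, as sshd-keygen.service does
    # above. "ssh-keygen -A" creates absent default-type keys under /etc/ssh.
    import glob, subprocess

    subprocess.run(["ssh-keygen", "-A"], check=True)
    print(sorted(glob.glob("/etc/ssh/ssh_host_*_key.pub")))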
Jan 23 23:55:32.972953 ntpd[1996]: Listen normally on 6 eth0 [fe80::44e:a5ff:feb7:2489%2]:123 Jan 23 23:55:32.974046 ntpd[1996]: 23 Jan 23:55:32 ntpd[1996]: Listen normally on 6 eth0 [fe80::44e:a5ff:feb7:2489%2]:123 Jan 23 23:55:33.036300 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:55:33.040814 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 23:55:33.042316 (kubelet)[2242]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:55:33.051071 systemd[1]: Startup finished in 1.202s (kernel) + 9.111s (initrd) + 9.171s (userspace) = 19.486s. Jan 23 23:55:33.266944 sshd[2228]: Accepted publickey for core from 4.153.228.146 port 58962 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:33.271709 sshd[2228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:33.275403 amazon-ssm-agent[2159]: 2026-01-23 23:55:33 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 23 23:55:33.310047 systemd-logind[2003]: New session 1 of user core. Jan 23 23:55:33.315094 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 23:55:33.324518 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 23:55:33.375658 amazon-ssm-agent[2159]: 2026-01-23 23:55:33 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2252) started Jan 23 23:55:33.381481 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 23:55:33.396545 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 23:55:33.420900 (systemd)[2258]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 23:55:33.477623 amazon-ssm-agent[2159]: 2026-01-23 23:55:33 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 23 23:55:33.723685 systemd[2258]: Queued start job for default target default.target. Jan 23 23:55:33.730829 systemd[2258]: Created slice app.slice - User Application Slice. Jan 23 23:55:33.730888 systemd[2258]: Reached target paths.target - Paths. Jan 23 23:55:33.730922 systemd[2258]: Reached target timers.target - Timers. Jan 23 23:55:33.735183 systemd[2258]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 23:55:33.761242 systemd[2258]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 23:55:33.761682 systemd[2258]: Reached target sockets.target - Sockets. Jan 23 23:55:33.761727 systemd[2258]: Reached target basic.target - Basic System. Jan 23 23:55:33.761843 systemd[2258]: Reached target default.target - Main User Target. Jan 23 23:55:33.761916 systemd[2258]: Startup finished in 323ms. Jan 23 23:55:33.761951 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 23:55:33.770332 systemd[1]: Started session-1.scope - Session 1 of User core. 
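In the "Startup finished" line the printed phases sum to 19.484 s against a printed total of 19.486 s; the per-phase figures are rounded for display from microsecond timestamps, so a couple of milliseconds of drift between the components and the total is expected:

    # The "Startup finished" arithmetic above: per-phase figures are rounded
    # for display, so their sum can trail the printed total by a few ms.
    kernel, initrd, userspace = 1.202, 9.111, 9.171
    print(f"{kernel + initrd + userspace:.3f}s vs. reported total 19.486s")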
Jan 23 23:55:34.017208 kubelet[2242]: E0123 23:55:34.016950 2242 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:55:34.023429 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:55:34.024052 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:55:34.026154 systemd[1]: kubelet.service: Consumed 1.418s CPU time. Jan 23 23:55:34.143586 systemd[1]: Started sshd@1-172.31.20.253:22-4.153.228.146:58974.service - OpenSSH per-connection server daemon (4.153.228.146:58974). Jan 23 23:55:34.651195 sshd[2276]: Accepted publickey for core from 4.153.228.146 port 58974 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:34.653907 sshd[2276]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:34.662698 systemd-logind[2003]: New session 2 of user core. Jan 23 23:55:34.673256 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 23:55:35.005885 sshd[2276]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:35.011913 systemd-logind[2003]: Session 2 logged out. Waiting for processes to exit. Jan 23 23:55:35.012582 systemd[1]: sshd@1-172.31.20.253:22-4.153.228.146:58974.service: Deactivated successfully. Jan 23 23:55:35.015809 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 23:55:35.021094 systemd-logind[2003]: Removed session 2. Jan 23 23:55:35.120451 systemd[1]: Started sshd@2-172.31.20.253:22-4.153.228.146:34522.service - OpenSSH per-connection server daemon (4.153.228.146:34522). Jan 23 23:55:35.651587 sshd[2283]: Accepted publickey for core from 4.153.228.146 port 34522 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:35.654420 sshd[2283]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:35.663477 systemd-logind[2003]: New session 3 of user core. Jan 23 23:55:35.671250 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 23:55:36.023832 sshd[2283]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:36.031172 systemd-logind[2003]: Session 3 logged out. Waiting for processes to exit. Jan 23 23:55:36.031324 systemd[1]: sshd@2-172.31.20.253:22-4.153.228.146:34522.service: Deactivated successfully. Jan 23 23:55:36.035373 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 23:55:36.037116 systemd-logind[2003]: Removed session 3. Jan 23 23:55:36.118491 systemd[1]: Started sshd@3-172.31.20.253:22-4.153.228.146:34534.service - OpenSSH per-connection server daemon (4.153.228.146:34534). Jan 23 23:55:36.613429 sshd[2291]: Accepted publickey for core from 4.153.228.146 port 34534 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:36.616108 sshd[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:36.624290 systemd-logind[2003]: New session 4 of user core. Jan 23 23:55:36.631248 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 23:55:36.968596 sshd[2291]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:36.529485 systemd-resolved[1943]: Clock change detected. Flushing caches. Jan 23 23:55:36.538787 systemd-journald[1576]: Time jumped backwards, rotating. 
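The kubelet exit above is the expected pre-bootstrap failure on a node like this: the unit starts before /var/lib/kubelet/config.yaml exists, dies with status 1, and systemd keeps rescheduling it (see the restart-counter lines below) until kubeadm writes that file during init/join. What it wants is a standard KubeletConfiguration; a minimal illustration of the file's shape (normally generated by kubeadm, not hand-written, and the cgroupDriver value here simply mirrors the SystemdCgroup:true runc option in the CRI config dumped earlier):

    # Sketch: the shape of the file kubelet is looking for at
    # /var/lib/kubelet/config.yaml. kubeadm generates the real one during
    # "kubeadm init"/"kubeadm join"; this is illustration only.
    MINIMAL_KUBELET_CONFIG = """\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    """
    print(MINIMAL_KUBELET_CONFIG)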
Jan 23 23:55:36.532582 systemd[1]: sshd@3-172.31.20.253:22-4.153.228.146:34534.service: Deactivated successfully. Jan 23 23:55:36.536338 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 23:55:36.541116 systemd-logind[2003]: Session 4 logged out. Waiting for processes to exit. Jan 23 23:55:36.544738 systemd-logind[2003]: Removed session 4. Jan 23 23:55:36.621944 systemd[1]: Started sshd@4-172.31.20.253:22-4.153.228.146:34540.service - OpenSSH per-connection server daemon (4.153.228.146:34540). Jan 23 23:55:37.113040 sshd[2299]: Accepted publickey for core from 4.153.228.146 port 34540 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:37.115631 sshd[2299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:37.124713 systemd-logind[2003]: New session 5 of user core. Jan 23 23:55:37.127727 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 23:55:37.409517 sudo[2302]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 23:55:37.410160 sudo[2302]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:55:37.427633 sudo[2302]: pam_unix(sudo:session): session closed for user root Jan 23 23:55:37.505405 sshd[2299]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:37.511992 systemd[1]: sshd@4-172.31.20.253:22-4.153.228.146:34540.service: Deactivated successfully. Jan 23 23:55:37.512672 systemd-logind[2003]: Session 5 logged out. Waiting for processes to exit. Jan 23 23:55:37.515769 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 23:55:37.519857 systemd-logind[2003]: Removed session 5. Jan 23 23:55:37.615067 systemd[1]: Started sshd@5-172.31.20.253:22-4.153.228.146:34556.service - OpenSSH per-connection server daemon (4.153.228.146:34556). Jan 23 23:55:38.146185 sshd[2307]: Accepted publickey for core from 4.153.228.146 port 34556 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:38.148991 sshd[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:38.156522 systemd-logind[2003]: New session 6 of user core. Jan 23 23:55:38.165775 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 23:55:38.446632 sudo[2311]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 23:55:38.447859 sudo[2311]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:55:38.455057 sudo[2311]: pam_unix(sudo:session): session closed for user root Jan 23 23:55:38.465748 sudo[2310]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 23 23:55:38.466393 sudo[2310]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:55:38.500600 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 23 23:55:38.502965 auditctl[2314]: No rules Jan 23 23:55:38.503711 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 23:55:38.504085 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 23 23:55:38.513382 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:55:38.570122 augenrules[2332]: No rules Jan 23 23:55:38.571756 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
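The two sudo invocations plus the audit-rules restart amount to deleting the shipped SELinux/default audit rule files and reloading an empty ruleset, which is what the "auditctl ... No rules" and "augenrules ... No rules" lines confirm. The resulting kernel state can be inspected, or reached directly, with auditctl; an illustrative check:

    # Sketch: inspect or clear the kernel audit ruleset, matching the
    # "No rules" outcome logged above. Needs root.
    import subprocess

    subprocess.run(["auditctl", "-l"], check=True)   # list loaded rules
    # subprocess.run(["auditctl", "-D"], check=True) # delete all rules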
Jan 23 23:55:38.576608 sudo[2310]: pam_unix(sudo:session): session closed for user root Jan 23 23:55:38.660855 sshd[2307]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:38.666077 systemd-logind[2003]: Session 6 logged out. Waiting for processes to exit. Jan 23 23:55:38.667377 systemd[1]: sshd@5-172.31.20.253:22-4.153.228.146:34556.service: Deactivated successfully. Jan 23 23:55:38.670134 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 23:55:38.675719 systemd-logind[2003]: Removed session 6. Jan 23 23:55:38.742610 systemd[1]: Started sshd@6-172.31.20.253:22-4.153.228.146:34560.service - OpenSSH per-connection server daemon (4.153.228.146:34560). Jan 23 23:55:39.248437 sshd[2340]: Accepted publickey for core from 4.153.228.146 port 34560 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:39.251175 sshd[2340]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:39.258948 systemd-logind[2003]: New session 7 of user core. Jan 23 23:55:39.272692 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 23:55:39.527817 sudo[2343]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 23:55:39.529000 sudo[2343]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:55:40.176321 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 23:55:40.178256 (dockerd)[2360]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 23:55:40.722995 dockerd[2360]: time="2026-01-23T23:55:40.722893540Z" level=info msg="Starting up" Jan 23 23:55:40.941513 dockerd[2360]: time="2026-01-23T23:55:40.941256005Z" level=info msg="Loading containers: start." Jan 23 23:55:41.138525 kernel: Initializing XFRM netlink socket Jan 23 23:55:41.205957 (udev-worker)[2384]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:55:41.303744 systemd-networkd[1942]: docker0: Link UP Jan 23 23:55:41.332021 dockerd[2360]: time="2026-01-23T23:55:41.331869435Z" level=info msg="Loading containers: done." Jan 23 23:55:41.355330 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1511084813-merged.mount: Deactivated successfully. Jan 23 23:55:41.361833 dockerd[2360]: time="2026-01-23T23:55:41.361700908Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 23:55:41.362469 dockerd[2360]: time="2026-01-23T23:55:41.362043052Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 23 23:55:41.362469 dockerd[2360]: time="2026-01-23T23:55:41.362246104Z" level=info msg="Daemon has completed initialization" Jan 23 23:55:41.421230 dockerd[2360]: time="2026-01-23T23:55:41.420809728Z" level=info msg="API listen on /run/docker.sock" Jan 23 23:55:41.421955 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 23:55:42.569237 containerd[2029]: time="2026-01-23T23:55:42.568775105Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 23:55:43.164680 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount733753863.mount: Deactivated successfully. Jan 23 23:55:43.813550 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
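"Scheduled restart job, restart counter is at 1" is systemd's Restart= machinery re-queuing kubelet roughly ten seconds after the 23:55:34 exit, consistent with a Restart=always / RestartSec=10 policy (an assumption; the unit file itself is not shown in this log). The counter is queryable at runtime:

    # Sketch: read the restart counter systemd increments above.
    # The RestartSec=10 guess is inferred from the ~10 s gap in the log.
    import subprocess

    out = subprocess.run(
        ["systemctl", "show", "-p", "NRestarts", "kubelet.service"],
        capture_output=True, text=True, check=True)
    print(out.stdout.strip())  # e.g. "NRestarts=1"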
Jan 23 23:55:43.824819 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:55:44.245791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:55:44.255991 (kubelet)[2568]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:55:44.344111 kubelet[2568]: E0123 23:55:44.343971 2568 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:55:44.351775 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:55:44.352153 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:55:44.814014 containerd[2029]: time="2026-01-23T23:55:44.813953277Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:44.816487 containerd[2029]: time="2026-01-23T23:55:44.816171297Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982" Jan 23 23:55:44.817314 containerd[2029]: time="2026-01-23T23:55:44.816692505Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:44.822749 containerd[2029]: time="2026-01-23T23:55:44.822666465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:44.825564 containerd[2029]: time="2026-01-23T23:55:44.825082269Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.256244908s" Jan 23 23:55:44.825564 containerd[2029]: time="2026-01-23T23:55:44.825148233Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 23 23:55:44.826369 containerd[2029]: time="2026-01-23T23:55:44.826215453Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 23 23:55:46.294376 containerd[2029]: time="2026-01-23T23:55:46.294292040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:46.295578 containerd[2029]: time="2026-01-23T23:55:46.295530008Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086" Jan 23 23:55:46.297481 containerd[2029]: time="2026-01-23T23:55:46.297134636Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:46.303032 containerd[2029]: time="2026-01-23T23:55:46.302946908Z" level=info msg="ImageCreate 
event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:46.305694 containerd[2029]: time="2026-01-23T23:55:46.305442632Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.479163243s" Jan 23 23:55:46.305694 containerd[2029]: time="2026-01-23T23:55:46.305535200Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 23 23:55:46.307224 containerd[2029]: time="2026-01-23T23:55:46.306932492Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 23 23:55:47.444295 containerd[2029]: time="2026-01-23T23:55:47.444210862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:47.447673 containerd[2029]: time="2026-01-23T23:55:47.447613930Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747" Jan 23 23:55:47.449093 containerd[2029]: time="2026-01-23T23:55:47.449013310Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:47.456427 containerd[2029]: time="2026-01-23T23:55:47.456350446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:47.460142 containerd[2029]: time="2026-01-23T23:55:47.459956614Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.152964854s" Jan 23 23:55:47.460142 containerd[2029]: time="2026-01-23T23:55:47.460013386Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 23 23:55:47.461030 containerd[2029]: time="2026-01-23T23:55:47.460733950Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 23:55:48.748575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3912623896.mount: Deactivated successfully. 
Jan 23 23:55:49.340047 containerd[2029]: time="2026-01-23T23:55:49.339957755Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:49.342006 containerd[2029]: time="2026-01-23T23:55:49.341684135Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724" Jan 23 23:55:49.343153 containerd[2029]: time="2026-01-23T23:55:49.343095671Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:49.349480 containerd[2029]: time="2026-01-23T23:55:49.348174575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:49.349818 containerd[2029]: time="2026-01-23T23:55:49.349771823Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.888981617s" Jan 23 23:55:49.349946 containerd[2029]: time="2026-01-23T23:55:49.349916243Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 23 23:55:49.350988 containerd[2029]: time="2026-01-23T23:55:49.350934407Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 23:55:49.883672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2275798441.mount: Deactivated successfully. 
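These pulls go through containerd's CRI plugin (hence the io.cri-containerd.image "managed" label on every ImageCreate event). The same path can be exercised by hand with crictl against the socket named in the CRI config dumped earlier, e.g. for the coredns image being fetched here (usage sketch, assuming crictl is installed):

    # Sketch: pull the same image through containerd's CRI endpoint with
    # crictl, mirroring the PullImage entries above. The socket path is the
    # ContainerdEndpoint from the logged CRI config.
    import subprocess

    subprocess.run(
        ["crictl", "--runtime-endpoint",
         "unix:///run/containerd/containerd.sock",
         "pull", "registry.k8s.io/coredns/coredns:v1.11.3"],
        check=True)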
Jan 23 23:55:51.175336 containerd[2029]: time="2026-01-23T23:55:51.174443424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:51.179503 containerd[2029]: time="2026-01-23T23:55:51.177551256Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 23 23:55:51.184015 containerd[2029]: time="2026-01-23T23:55:51.183940392Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:51.193094 containerd[2029]: time="2026-01-23T23:55:51.193029660Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:51.195747 containerd[2029]: time="2026-01-23T23:55:51.195689748Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.844626365s" Jan 23 23:55:51.195946 containerd[2029]: time="2026-01-23T23:55:51.195913620Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 23 23:55:51.197864 containerd[2029]: time="2026-01-23T23:55:51.197797056Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 23:55:51.719708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount847818575.mount: Deactivated successfully. 
Jan 23 23:55:51.732368 containerd[2029]: time="2026-01-23T23:55:51.732279003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:51.735029 containerd[2029]: time="2026-01-23T23:55:51.734667147Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 23 23:55:51.738480 containerd[2029]: time="2026-01-23T23:55:51.737085879Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:51.743705 containerd[2029]: time="2026-01-23T23:55:51.743642703Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:51.745360 containerd[2029]: time="2026-01-23T23:55:51.745295799Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 547.436895ms" Jan 23 23:55:51.745360 containerd[2029]: time="2026-01-23T23:55:51.745354143Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 23 23:55:51.746737 containerd[2029]: time="2026-01-23T23:55:51.746679003Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 23:55:52.282046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2566127276.mount: Deactivated successfully. Jan 23 23:55:54.564028 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 23:55:54.572831 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
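One detail worth flagging: the CRI config dumped earlier declares SandboxImage:registry.k8s.io/pause:3.8, while the pull above fetches pause:3.10, presumably the version the bootstrap tooling requests on top of containerd's older configured default (an inference from this log, not something it states). Which sandbox image containerd itself would fall back to can be read from its merged configuration:

    # Sketch: check which sandbox ("pause") image containerd is configured
    # with, to compare against the pause:3.10 pull above. "containerd config
    # dump" prints the merged effective configuration. Needs root.
    import subprocess

    dump = subprocess.run(["containerd", "config", "dump"],
                          capture_output=True, text=True, check=True).stdout
    print([line.strip() for line in dump.splitlines() if "sandbox_image" in line])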
Jan 23 23:55:55.113564 containerd[2029]: time="2026-01-23T23:55:55.113099440Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:55.138571 containerd[2029]: time="2026-01-23T23:55:55.138493696Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Jan 23 23:55:55.176429 containerd[2029]: time="2026-01-23T23:55:55.176301916Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:55.215235 containerd[2029]: time="2026-01-23T23:55:55.215120968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:55.218594 containerd[2029]: time="2026-01-23T23:55:55.218528740Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.471787781s" Jan 23 23:55:55.218917 containerd[2029]: time="2026-01-23T23:55:55.218736004Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 23 23:55:55.684221 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:55:55.705194 (kubelet)[2717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:55:55.827506 kubelet[2717]: E0123 23:55:55.826426 2717 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:55:55.835204 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:55:55.835644 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:56:00.894089 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 23 23:56:03.490926 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:03.508936 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:03.569413 systemd[1]: Reloading requested from client PID 2747 ('systemctl') (unit session-7.scope)... Jan 23 23:56:03.569478 systemd[1]: Reloading... Jan 23 23:56:03.826508 zram_generator::config[2788]: No configuration found. Jan 23 23:56:04.069317 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:56:04.246622 systemd[1]: Reloading finished in 676 ms. Jan 23 23:56:04.335814 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 23:56:04.336022 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 23:56:04.337205 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
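The "ListenStream= references a path below legacy directory /var/run/" warning emitted during the daemon reload is cosmetic: on current systems /var/run is a symlink to /run, so the unit's socket resolves to the same object either way, and systemd is only asking for the unit file to spell it /run/docker.sock. The aliasing is easy to confirm:

    # Behind systemd's /var/run warning above: /var/run is a symlink to /run
    # on modern Linux, so both spellings resolve to the same path.
    import os

    print(os.path.realpath("/var/run/docker.sock"))  # -> /run/docker.sock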
Jan 23 23:56:04.354100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:04.687430 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:04.705019 (kubelet)[2849]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:56:04.781034 kubelet[2849]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:56:04.781034 kubelet[2849]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:56:04.781034 kubelet[2849]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:56:04.781636 kubelet[2849]: I0123 23:56:04.781144 2849 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:56:06.392895 kubelet[2849]: I0123 23:56:06.392821 2849 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 23:56:06.392895 kubelet[2849]: I0123 23:56:06.392875 2849 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:56:06.393650 kubelet[2849]: I0123 23:56:06.393356 2849 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 23:56:06.434076 kubelet[2849]: E0123 23:56:06.434026 2849 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.20.253:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.20.253:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:06.439824 kubelet[2849]: I0123 23:56:06.439614 2849 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:56:06.449270 kubelet[2849]: E0123 23:56:06.448507 2849 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:56:06.449270 kubelet[2849]: I0123 23:56:06.448564 2849 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 23 23:56:06.458436 kubelet[2849]: I0123 23:56:06.457892 2849 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 23:56:06.459324 kubelet[2849]: I0123 23:56:06.459256 2849 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:56:06.459751 kubelet[2849]: I0123 23:56:06.459427 2849 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-253","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 23:56:06.460137 kubelet[2849]: I0123 23:56:06.460114 2849 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:56:06.460252 kubelet[2849]: I0123 23:56:06.460233 2849 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 23:56:06.460981 kubelet[2849]: I0123 23:56:06.460675 2849 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:56:06.466401 kubelet[2849]: I0123 23:56:06.466366 2849 kubelet.go:446] "Attempting to sync node with API server" Jan 23 23:56:06.466605 kubelet[2849]: I0123 23:56:06.466583 2849 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:56:06.466723 kubelet[2849]: I0123 23:56:06.466705 2849 kubelet.go:352] "Adding apiserver pod source" Jan 23 23:56:06.467277 kubelet[2849]: I0123 23:56:06.466817 2849 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:56:06.473112 kubelet[2849]: W0123 23:56:06.473005 2849 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-253&limit=500&resourceVersion=0": dial tcp 172.31.20.253:6443: connect: connection refused Jan 23 23:56:06.473274 kubelet[2849]: E0123 23:56:06.473124 2849 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.20.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-253&limit=500&resourceVersion=0\": dial tcp 172.31.20.253:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:06.473939 kubelet[2849]: W0123 
23:56:06.473865 2849 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.253:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.20.253:6443: connect: connection refused Jan 23 23:56:06.474065 kubelet[2849]: E0123 23:56:06.473953 2849 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.20.253:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.253:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:06.475498 kubelet[2849]: I0123 23:56:06.474113 2849 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:56:06.476077 kubelet[2849]: I0123 23:56:06.476030 2849 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 23:56:06.476308 kubelet[2849]: W0123 23:56:06.476272 2849 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 23:56:06.478611 kubelet[2849]: I0123 23:56:06.478567 2849 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:56:06.478779 kubelet[2849]: I0123 23:56:06.478631 2849 server.go:1287] "Started kubelet" Jan 23 23:56:06.485515 kubelet[2849]: I0123 23:56:06.484334 2849 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:56:06.496127 kubelet[2849]: I0123 23:56:06.496077 2849 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:56:06.497261 kubelet[2849]: E0123 23:56:06.497204 2849 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-253\" not found" Jan 23 23:56:06.497261 kubelet[2849]: I0123 23:56:06.496600 2849 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:56:06.499095 kubelet[2849]: I0123 23:56:06.499056 2849 server.go:479] "Adding debug handlers to kubelet server" Jan 23 23:56:06.500971 kubelet[2849]: I0123 23:56:06.500892 2849 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:56:06.501532 kubelet[2849]: I0123 23:56:06.501479 2849 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:56:06.508159 kubelet[2849]: I0123 23:56:06.508108 2849 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:56:06.511256 kubelet[2849]: I0123 23:56:06.496560 2849 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:56:06.511902 kubelet[2849]: I0123 23:56:06.511844 2849 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:56:06.511902 kubelet[2849]: E0123 23:56:06.512201 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-253?timeout=10s\": dial tcp 172.31.20.253:6443: connect: connection refused" interval="200ms" Jan 23 23:56:06.511902 kubelet[2849]: E0123 23:56:06.512485 2849 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.253:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.253:6443: connect: connection 
refused" event="&Event{ObjectMeta:{ip-172-31-20-253.188d81749d1d2f18 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-253,UID:ip-172-31-20-253,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-253,},FirstTimestamp:2026-01-23 23:56:06.478597912 +0000 UTC m=+1.766812030,LastTimestamp:2026-01-23 23:56:06.478597912 +0000 UTC m=+1.766812030,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-253,}" Jan 23 23:56:06.511902 kubelet[2849]: I0123 23:56:06.513213 2849 factory.go:221] Registration of the systemd container factory successfully Jan 23 23:56:06.511902 kubelet[2849]: I0123 23:56:06.513406 2849 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:56:06.517477 kubelet[2849]: W0123 23:56:06.515512 2849 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.253:6443: connect: connection refused Jan 23 23:56:06.517477 kubelet[2849]: E0123 23:56:06.515606 2849 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.20.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.253:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:06.517477 kubelet[2849]: E0123 23:56:06.516939 2849 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:56:06.519142 kubelet[2849]: I0123 23:56:06.519086 2849 factory.go:221] Registration of the containerd container factory successfully Jan 23 23:56:06.531729 kubelet[2849]: I0123 23:56:06.531662 2849 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 23:56:06.536616 kubelet[2849]: I0123 23:56:06.536568 2849 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 23:56:06.536777 kubelet[2849]: I0123 23:56:06.536759 2849 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 23:56:06.536941 kubelet[2849]: I0123 23:56:06.536921 2849 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 23:56:06.537073 kubelet[2849]: I0123 23:56:06.537053 2849 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 23:56:06.537301 kubelet[2849]: E0123 23:56:06.537240 2849 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:56:06.544955 kubelet[2849]: W0123 23:56:06.544869 2849 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.253:6443: connect: connection refused Jan 23 23:56:06.545158 kubelet[2849]: E0123 23:56:06.544967 2849 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.20.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.253:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:06.557239 kubelet[2849]: I0123 23:56:06.557206 2849 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:56:06.557409 kubelet[2849]: I0123 23:56:06.557389 2849 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:56:06.557571 kubelet[2849]: I0123 23:56:06.557553 2849 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:56:06.559959 kubelet[2849]: I0123 23:56:06.559928 2849 policy_none.go:49] "None policy: Start" Jan 23 23:56:06.560091 kubelet[2849]: I0123 23:56:06.560071 2849 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:56:06.560571 kubelet[2849]: I0123 23:56:06.560195 2849 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:56:06.571204 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 23:56:06.589987 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 23:56:06.598331 kubelet[2849]: E0123 23:56:06.597970 2849 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-253\" not found" Jan 23 23:56:06.602800 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 23:56:06.619142 kubelet[2849]: I0123 23:56:06.619080 2849 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 23:56:06.620033 kubelet[2849]: I0123 23:56:06.619377 2849 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:56:06.620033 kubelet[2849]: I0123 23:56:06.619409 2849 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:56:06.620033 kubelet[2849]: I0123 23:56:06.619953 2849 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:56:06.622170 kubelet[2849]: E0123 23:56:06.622114 2849 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 23:56:06.622306 kubelet[2849]: E0123 23:56:06.622187 2849 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-253\" not found" Jan 23 23:56:06.656179 systemd[1]: Created slice kubepods-burstable-pod8b24af9ba9ed2bb2e74795bb40f82c46.slice - libcontainer container kubepods-burstable-pod8b24af9ba9ed2bb2e74795bb40f82c46.slice. 
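The kubepods-burstable-pod8b24af9ba9ed2bb2e74795bb40f82c46.slice unit created above follows the systemd cgroup driver's naming convention ("CgroupDriver":"systemd" in the NodeConfig dump earlier): one slice per QoS class under kubepods.slice, one slice per pod under that, with dashes in the pod UID escaped to underscores (visible later in the log as kubepods-besteffort-pod991d69a1_85da_48b5_8fc6_2fd6604592be.slice). A sketch of the convention, not kubelet's implementation:

package main

import (
	"fmt"
	"strings"
)

// podSlice composes the systemd slice name for a pod. The layout for the
// guaranteed class (directly under kubepods, no QoS sub-slice) is an
// assumption; only burstable and besteffort slices appear in this log.
func podSlice(qos, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_") // systemd uses '-' as a hierarchy separator
	if qos == "guaranteed" {
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	// UID taken from the kube-proxy pod entries later in the log.
	fmt.Println(podSlice("besteffort", "991d69a1-85da-48b5-8fc6-2fd6604592be"))
	// -> kubepods-besteffort-pod991d69a1_85da_48b5_8fc6_2fd6604592be.slice
}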
Jan 23 23:56:06.671337 kubelet[2849]: E0123 23:56:06.670988 2849 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-253\" not found" node="ip-172-31-20-253" Jan 23 23:56:06.675992 systemd[1]: Created slice kubepods-burstable-pod92a04533c8206c9d2166ff913aa66cdf.slice - libcontainer container kubepods-burstable-pod92a04533c8206c9d2166ff913aa66cdf.slice. Jan 23 23:56:06.682740 kubelet[2849]: E0123 23:56:06.681496 2849 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-253\" not found" node="ip-172-31-20-253" Jan 23 23:56:06.688406 systemd[1]: Created slice kubepods-burstable-pod75e36211074ef53eebf26dcdce33a8e6.slice - libcontainer container kubepods-burstable-pod75e36211074ef53eebf26dcdce33a8e6.slice. Jan 23 23:56:06.692563 kubelet[2849]: E0123 23:56:06.692513 2849 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-253\" not found" node="ip-172-31-20-253" Jan 23 23:56:06.713497 kubelet[2849]: I0123 23:56:06.713406 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92a04533c8206c9d2166ff913aa66cdf-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-253\" (UID: \"92a04533c8206c9d2166ff913aa66cdf\") " pod="kube-system/kube-controller-manager-ip-172-31-20-253" Jan 23 23:56:06.713692 kubelet[2849]: I0123 23:56:06.713516 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/92a04533c8206c9d2166ff913aa66cdf-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-253\" (UID: \"92a04533c8206c9d2166ff913aa66cdf\") " pod="kube-system/kube-controller-manager-ip-172-31-20-253" Jan 23 23:56:06.713692 kubelet[2849]: I0123 23:56:06.713565 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92a04533c8206c9d2166ff913aa66cdf-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-253\" (UID: \"92a04533c8206c9d2166ff913aa66cdf\") " pod="kube-system/kube-controller-manager-ip-172-31-20-253" Jan 23 23:56:06.713692 kubelet[2849]: I0123 23:56:06.713607 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/92a04533c8206c9d2166ff913aa66cdf-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-253\" (UID: \"92a04533c8206c9d2166ff913aa66cdf\") " pod="kube-system/kube-controller-manager-ip-172-31-20-253" Jan 23 23:56:06.713692 kubelet[2849]: I0123 23:56:06.713649 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92a04533c8206c9d2166ff913aa66cdf-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-253\" (UID: \"92a04533c8206c9d2166ff913aa66cdf\") " pod="kube-system/kube-controller-manager-ip-172-31-20-253" Jan 23 23:56:06.713692 kubelet[2849]: I0123 23:56:06.713686 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b24af9ba9ed2bb2e74795bb40f82c46-ca-certs\") pod \"kube-apiserver-ip-172-31-20-253\" (UID: \"8b24af9ba9ed2bb2e74795bb40f82c46\") " 
pod="kube-system/kube-apiserver-ip-172-31-20-253" Jan 23 23:56:06.714016 kubelet[2849]: I0123 23:56:06.713721 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b24af9ba9ed2bb2e74795bb40f82c46-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-253\" (UID: \"8b24af9ba9ed2bb2e74795bb40f82c46\") " pod="kube-system/kube-apiserver-ip-172-31-20-253" Jan 23 23:56:06.714016 kubelet[2849]: I0123 23:56:06.713757 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b24af9ba9ed2bb2e74795bb40f82c46-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-253\" (UID: \"8b24af9ba9ed2bb2e74795bb40f82c46\") " pod="kube-system/kube-apiserver-ip-172-31-20-253" Jan 23 23:56:06.714016 kubelet[2849]: I0123 23:56:06.713799 2849 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75e36211074ef53eebf26dcdce33a8e6-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-253\" (UID: \"75e36211074ef53eebf26dcdce33a8e6\") " pod="kube-system/kube-scheduler-ip-172-31-20-253" Jan 23 23:56:06.714429 kubelet[2849]: E0123 23:56:06.714363 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-253?timeout=10s\": dial tcp 172.31.20.253:6443: connect: connection refused" interval="400ms" Jan 23 23:56:06.722129 kubelet[2849]: I0123 23:56:06.722068 2849 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-253" Jan 23 23:56:06.722860 kubelet[2849]: E0123 23:56:06.722799 2849 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.253:6443/api/v1/nodes\": dial tcp 172.31.20.253:6443: connect: connection refused" node="ip-172-31-20-253" Jan 23 23:56:06.925414 kubelet[2849]: I0123 23:56:06.925289 2849 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-253" Jan 23 23:56:06.926155 kubelet[2849]: E0123 23:56:06.925775 2849 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.253:6443/api/v1/nodes\": dial tcp 172.31.20.253:6443: connect: connection refused" node="ip-172-31-20-253" Jan 23 23:56:06.972560 containerd[2029]: time="2026-01-23T23:56:06.972490543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-253,Uid:8b24af9ba9ed2bb2e74795bb40f82c46,Namespace:kube-system,Attempt:0,}" Jan 23 23:56:06.983691 containerd[2029]: time="2026-01-23T23:56:06.983525875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-253,Uid:92a04533c8206c9d2166ff913aa66cdf,Namespace:kube-system,Attempt:0,}" Jan 23 23:56:06.994442 containerd[2029]: time="2026-01-23T23:56:06.994037935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-253,Uid:75e36211074ef53eebf26dcdce33a8e6,Namespace:kube-system,Attempt:0,}" Jan 23 23:56:07.115769 kubelet[2849]: E0123 23:56:07.115707 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-253?timeout=10s\": dial tcp 172.31.20.253:6443: connect: connection refused" interval="800ms" Jan 23 23:56:07.328204 kubelet[2849]: I0123 
23:56:07.328160 2849 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-253" Jan 23 23:56:07.328705 kubelet[2849]: E0123 23:56:07.328660 2849 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.253:6443/api/v1/nodes\": dial tcp 172.31.20.253:6443: connect: connection refused" node="ip-172-31-20-253" Jan 23 23:56:07.439542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2956750876.mount: Deactivated successfully. Jan 23 23:56:07.445520 containerd[2029]: time="2026-01-23T23:56:07.444713333Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:56:07.447315 containerd[2029]: time="2026-01-23T23:56:07.447263885Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 23 23:56:07.452488 containerd[2029]: time="2026-01-23T23:56:07.451141349Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:56:07.452622 containerd[2029]: time="2026-01-23T23:56:07.452514461Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:56:07.454304 containerd[2029]: time="2026-01-23T23:56:07.454227797Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:56:07.455901 containerd[2029]: time="2026-01-23T23:56:07.455863433Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:56:07.459042 containerd[2029]: time="2026-01-23T23:56:07.458991605Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:56:07.464094 containerd[2029]: time="2026-01-23T23:56:07.464018261Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 469.852946ms" Jan 23 23:56:07.467430 containerd[2029]: time="2026-01-23T23:56:07.467352821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:56:07.470263 containerd[2029]: time="2026-01-23T23:56:07.470209409Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 497.600114ms" Jan 23 23:56:07.476382 containerd[2029]: time="2026-01-23T23:56:07.476263001Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag 
\"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 492.624554ms" Jan 23 23:56:07.668410 containerd[2029]: time="2026-01-23T23:56:07.667295526Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:07.671626 containerd[2029]: time="2026-01-23T23:56:07.669903378Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:07.671626 containerd[2029]: time="2026-01-23T23:56:07.669948714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:07.671978 containerd[2029]: time="2026-01-23T23:56:07.671803158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:07.685501 containerd[2029]: time="2026-01-23T23:56:07.683770830Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:07.685501 containerd[2029]: time="2026-01-23T23:56:07.684205794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:07.685501 containerd[2029]: time="2026-01-23T23:56:07.684532014Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:07.685501 containerd[2029]: time="2026-01-23T23:56:07.684672198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:07.687751 containerd[2029]: time="2026-01-23T23:56:07.687170502Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:07.687751 containerd[2029]: time="2026-01-23T23:56:07.687226506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:07.688506 kubelet[2849]: W0123 23:56:07.688392 2849 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-253&limit=500&resourceVersion=0": dial tcp 172.31.20.253:6443: connect: connection refused Jan 23 23:56:07.689352 kubelet[2849]: E0123 23:56:07.689239 2849 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.20.253:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-253&limit=500&resourceVersion=0\": dial tcp 172.31.20.253:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:07.689878 containerd[2029]: time="2026-01-23T23:56:07.689206566Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:07.689878 containerd[2029]: time="2026-01-23T23:56:07.689425914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:07.725827 systemd[1]: Started cri-containerd-5bb90bf1e2dc65d584cc85c480c195649d838b98907ecf17aa3bc6e49f6d7fb4.scope - libcontainer container 5bb90bf1e2dc65d584cc85c480c195649d838b98907ecf17aa3bc6e49f6d7fb4. Jan 23 23:56:07.738696 kubelet[2849]: W0123 23:56:07.737966 2849 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.253:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.20.253:6443: connect: connection refused Jan 23 23:56:07.739515 kubelet[2849]: E0123 23:56:07.738936 2849 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.20.253:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.20.253:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:07.742921 systemd[1]: Started cri-containerd-b1cb250d7e28ecd3f9a6be3286e5c0559cd90ffb339dd0975c22df8e457a049f.scope - libcontainer container b1cb250d7e28ecd3f9a6be3286e5c0559cd90ffb339dd0975c22df8e457a049f. Jan 23 23:56:07.761763 systemd[1]: Started cri-containerd-b45850f4e5a883164c3638d72a540ef7004a08db8521c8028b37ec6f5fe1b85c.scope - libcontainer container b45850f4e5a883164c3638d72a540ef7004a08db8521c8028b37ec6f5fe1b85c. Jan 23 23:56:07.852705 containerd[2029]: time="2026-01-23T23:56:07.852650143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-253,Uid:92a04533c8206c9d2166ff913aa66cdf,Namespace:kube-system,Attempt:0,} returns sandbox id \"5bb90bf1e2dc65d584cc85c480c195649d838b98907ecf17aa3bc6e49f6d7fb4\"" Jan 23 23:56:07.865564 containerd[2029]: time="2026-01-23T23:56:07.865249567Z" level=info msg="CreateContainer within sandbox \"5bb90bf1e2dc65d584cc85c480c195649d838b98907ecf17aa3bc6e49f6d7fb4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 23:56:07.887245 containerd[2029]: time="2026-01-23T23:56:07.887181619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-253,Uid:8b24af9ba9ed2bb2e74795bb40f82c46,Namespace:kube-system,Attempt:0,} returns sandbox id \"b45850f4e5a883164c3638d72a540ef7004a08db8521c8028b37ec6f5fe1b85c\"" Jan 23 23:56:07.895168 containerd[2029]: time="2026-01-23T23:56:07.894946687Z" level=info msg="CreateContainer within sandbox \"b45850f4e5a883164c3638d72a540ef7004a08db8521c8028b37ec6f5fe1b85c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 23:56:07.901272 containerd[2029]: time="2026-01-23T23:56:07.901209175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-253,Uid:75e36211074ef53eebf26dcdce33a8e6,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1cb250d7e28ecd3f9a6be3286e5c0559cd90ffb339dd0975c22df8e457a049f\"" Jan 23 23:56:07.908090 containerd[2029]: time="2026-01-23T23:56:07.908005375Z" level=info msg="CreateContainer within sandbox \"5bb90bf1e2dc65d584cc85c480c195649d838b98907ecf17aa3bc6e49f6d7fb4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c14abaf93b99505928d4de794f671f60cd734053ea8af6363efd8c9b6c770dd3\"" Jan 23 23:56:07.909852 kubelet[2849]: W0123 23:56:07.909673 2849 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://172.31.20.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.253:6443: connect: connection refused Jan 23 23:56:07.909852 kubelet[2849]: E0123 23:56:07.909795 2849 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.20.253:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.20.253:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:07.910531 containerd[2029]: time="2026-01-23T23:56:07.910424215Z" level=info msg="CreateContainer within sandbox \"b1cb250d7e28ecd3f9a6be3286e5c0559cd90ffb339dd0975c22df8e457a049f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 23:56:07.910665 containerd[2029]: time="2026-01-23T23:56:07.910443655Z" level=info msg="StartContainer for \"c14abaf93b99505928d4de794f671f60cd734053ea8af6363efd8c9b6c770dd3\"" Jan 23 23:56:07.917330 kubelet[2849]: E0123 23:56:07.917245 2849 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-253?timeout=10s\": dial tcp 172.31.20.253:6443: connect: connection refused" interval="1.6s" Jan 23 23:56:07.933402 containerd[2029]: time="2026-01-23T23:56:07.931420399Z" level=info msg="CreateContainer within sandbox \"b45850f4e5a883164c3638d72a540ef7004a08db8521c8028b37ec6f5fe1b85c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"04e9d54cad306baddd9402693cf2268a51d307fc4c63936d9d728d5cbf22d2f0\"" Jan 23 23:56:07.933908 containerd[2029]: time="2026-01-23T23:56:07.933831535Z" level=info msg="StartContainer for \"04e9d54cad306baddd9402693cf2268a51d307fc4c63936d9d728d5cbf22d2f0\"" Jan 23 23:56:07.945846 containerd[2029]: time="2026-01-23T23:56:07.945782468Z" level=info msg="CreateContainer within sandbox \"b1cb250d7e28ecd3f9a6be3286e5c0559cd90ffb339dd0975c22df8e457a049f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e82b863e34f01ba24045b2a816789424436bc6f718339703c878a30034186c50\"" Jan 23 23:56:07.947591 containerd[2029]: time="2026-01-23T23:56:07.947239376Z" level=info msg="StartContainer for \"e82b863e34f01ba24045b2a816789424436bc6f718339703c878a30034186c50\"" Jan 23 23:56:07.984653 systemd[1]: Started cri-containerd-c14abaf93b99505928d4de794f671f60cd734053ea8af6363efd8c9b6c770dd3.scope - libcontainer container c14abaf93b99505928d4de794f671f60cd734053ea8af6363efd8c9b6c770dd3. Jan 23 23:56:08.020307 systemd[1]: Started cri-containerd-04e9d54cad306baddd9402693cf2268a51d307fc4c63936d9d728d5cbf22d2f0.scope - libcontainer container 04e9d54cad306baddd9402693cf2268a51d307fc4c63936d9d728d5cbf22d2f0. Jan 23 23:56:08.061756 systemd[1]: Started cri-containerd-e82b863e34f01ba24045b2a816789424436bc6f718339703c878a30034186c50.scope - libcontainer container e82b863e34f01ba24045b2a816789424436bc6f718339703c878a30034186c50. 
Jan 23 23:56:08.088385 kubelet[2849]: W0123 23:56:08.088261 2849 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.253:6443: connect: connection refused Jan 23 23:56:08.088551 kubelet[2849]: E0123 23:56:08.088387 2849 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.20.253:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.20.253:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:56:08.134560 kubelet[2849]: I0123 23:56:08.133738 2849 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-253" Jan 23 23:56:08.134560 kubelet[2849]: E0123 23:56:08.134231 2849 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.20.253:6443/api/v1/nodes\": dial tcp 172.31.20.253:6443: connect: connection refused" node="ip-172-31-20-253" Jan 23 23:56:08.147366 containerd[2029]: time="2026-01-23T23:56:08.145407677Z" level=info msg="StartContainer for \"c14abaf93b99505928d4de794f671f60cd734053ea8af6363efd8c9b6c770dd3\" returns successfully" Jan 23 23:56:08.163494 containerd[2029]: time="2026-01-23T23:56:08.158245865Z" level=info msg="StartContainer for \"04e9d54cad306baddd9402693cf2268a51d307fc4c63936d9d728d5cbf22d2f0\" returns successfully" Jan 23 23:56:08.211051 containerd[2029]: time="2026-01-23T23:56:08.210844169Z" level=info msg="StartContainer for \"e82b863e34f01ba24045b2a816789424436bc6f718339703c878a30034186c50\" returns successfully" Jan 23 23:56:08.563735 kubelet[2849]: E0123 23:56:08.563685 2849 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-253\" not found" node="ip-172-31-20-253" Jan 23 23:56:08.567395 kubelet[2849]: E0123 23:56:08.567255 2849 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-253\" not found" node="ip-172-31-20-253" Jan 23 23:56:08.571366 kubelet[2849]: E0123 23:56:08.571317 2849 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-253\" not found" node="ip-172-31-20-253" Jan 23 23:56:09.575231 kubelet[2849]: E0123 23:56:09.575176 2849 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-253\" not found" node="ip-172-31-20-253" Jan 23 23:56:09.576076 kubelet[2849]: E0123 23:56:09.576028 2849 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-253\" not found" node="ip-172-31-20-253" Jan 23 23:56:09.739475 kubelet[2849]: I0123 23:56:09.737279 2849 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-253" Jan 23 23:56:12.339088 kubelet[2849]: E0123 23:56:12.338797 2849 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-253\" not found" node="ip-172-31-20-253" Jan 23 23:56:13.666115 kubelet[2849]: E0123 23:56:13.666032 2849 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-20-253\" not found" node="ip-172-31-20-253" Jan 23 23:56:14.128558 kubelet[2849]: E0123 23:56:14.128500 
2849 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-253\" not found" node="ip-172-31-20-253" Jan 23 23:56:14.207398 kubelet[2849]: I0123 23:56:14.207055 2849 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-253" Jan 23 23:56:14.207398 kubelet[2849]: E0123 23:56:14.207110 2849 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-20-253\": node \"ip-172-31-20-253\" not found" Jan 23 23:56:14.298505 kubelet[2849]: I0123 23:56:14.298430 2849 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-253" Jan 23 23:56:14.334905 kubelet[2849]: E0123 23:56:14.334844 2849 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-20-253\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-20-253" Jan 23 23:56:14.334905 kubelet[2849]: I0123 23:56:14.334895 2849 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-253" Jan 23 23:56:14.347506 kubelet[2849]: E0123 23:56:14.347421 2849 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-20-253\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-20-253" Jan 23 23:56:14.347506 kubelet[2849]: I0123 23:56:14.347500 2849 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-253" Jan 23 23:56:14.361752 kubelet[2849]: E0123 23:56:14.361683 2849 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-20-253\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-20-253" Jan 23 23:56:14.478622 kubelet[2849]: I0123 23:56:14.477631 2849 apiserver.go:52] "Watching apiserver" Jan 23 23:56:14.498305 kubelet[2849]: I0123 23:56:14.498240 2849 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:56:15.129606 update_engine[2004]: I20260123 23:56:15.129499 2004 update_attempter.cc:509] Updating boot flags... Jan 23 23:56:15.243180 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3138) Jan 23 23:56:15.723526 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3139) Jan 23 23:56:16.684859 systemd[1]: Reloading requested from client PID 3307 ('systemctl') (unit session-7.scope)... Jan 23 23:56:16.684885 systemd[1]: Reloading... Jan 23 23:56:16.875693 zram_generator::config[3354]: No configuration found. Jan 23 23:56:17.111951 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:56:17.315868 systemd[1]: Reloading finished in 630 ms. Jan 23 23:56:17.396275 kubelet[2849]: I0123 23:56:17.396023 2849 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:56:17.397261 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:17.414562 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 23:56:17.415015 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
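Every kubelet entry in this log uses the klog header format: a severity letter (I/W/E/F), mmdd date, time with microseconds, PID, source file:line, then the message. A small sketch that splits the nodelease.go line above into its fields — the regular expression is mine, written only to cover lines of this shape, and is not klog's own parser:

package main

import (
	"fmt"
	"regexp"
)

// severity, mmdd, hh:mm:ss.uuuuuu, pid, file:line, message
var klogRe = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+)\s+([\w.]+:\d+)\] (.*)$`)

func main() {
	line := `E0123 23:56:14.128500 2849 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease"`
	m := klogRe.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no match")
		return
	}
	fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s\nmsg=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}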
Jan 23 23:56:17.415106 systemd[1]: kubelet.service: Consumed 2.614s CPU time, 133.2M memory peak, 0B memory swap peak. Jan 23 23:56:17.423005 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:17.746389 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:17.768043 (kubelet)[3407]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:56:17.869478 kubelet[3407]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:56:17.869478 kubelet[3407]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:56:17.869478 kubelet[3407]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:56:17.869478 kubelet[3407]: I0123 23:56:17.868206 3407 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:56:17.881294 kubelet[3407]: I0123 23:56:17.881248 3407 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 23:56:17.881640 kubelet[3407]: I0123 23:56:17.881616 3407 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:56:17.882814 kubelet[3407]: I0123 23:56:17.882773 3407 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 23:56:17.889501 kubelet[3407]: I0123 23:56:17.889430 3407 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 23 23:56:17.895092 kubelet[3407]: I0123 23:56:17.895019 3407 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:56:17.902868 kubelet[3407]: E0123 23:56:17.902820 3407 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:56:17.903081 kubelet[3407]: I0123 23:56:17.903057 3407 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 23 23:56:17.908278 kubelet[3407]: I0123 23:56:17.908211 3407 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 23:56:17.908928 kubelet[3407]: I0123 23:56:17.908879 3407 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:56:17.909729 kubelet[3407]: I0123 23:56:17.909054 3407 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-253","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 23:56:17.909729 kubelet[3407]: I0123 23:56:17.909362 3407 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:56:17.909729 kubelet[3407]: I0123 23:56:17.909383 3407 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 23:56:17.909729 kubelet[3407]: I0123 23:56:17.909510 3407 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:56:17.910140 kubelet[3407]: I0123 23:56:17.910103 3407 kubelet.go:446] "Attempting to sync node with API server" Jan 23 23:56:17.911077 kubelet[3407]: I0123 23:56:17.910963 3407 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:56:17.911077 kubelet[3407]: I0123 23:56:17.911021 3407 kubelet.go:352] "Adding apiserver pod source" Jan 23 23:56:17.911077 kubelet[3407]: I0123 23:56:17.911042 3407 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:56:17.916500 kubelet[3407]: I0123 23:56:17.916086 3407 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:56:17.918930 kubelet[3407]: I0123 23:56:17.918892 3407 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 23:56:17.920488 kubelet[3407]: I0123 23:56:17.919787 3407 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:56:17.920488 kubelet[3407]: I0123 23:56:17.919838 3407 server.go:1287] "Started kubelet" Jan 23 23:56:17.928397 kubelet[3407]: I0123 23:56:17.928355 3407 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:56:17.938495 kubelet[3407]: I0123 23:56:17.937530 3407 server.go:169] "Starting 
to listen" address="0.0.0.0" port=10250 Jan 23 23:56:17.941011 kubelet[3407]: I0123 23:56:17.940940 3407 server.go:479] "Adding debug handlers to kubelet server" Jan 23 23:56:17.951551 kubelet[3407]: I0123 23:56:17.951435 3407 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:56:17.952486 kubelet[3407]: I0123 23:56:17.952047 3407 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:56:17.953303 kubelet[3407]: I0123 23:56:17.953265 3407 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:56:17.971395 kubelet[3407]: I0123 23:56:17.971355 3407 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:56:17.972554 kubelet[3407]: E0123 23:56:17.971925 3407 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-20-253\" not found" Jan 23 23:56:18.013802 kubelet[3407]: I0123 23:56:18.013650 3407 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:56:18.017892 kubelet[3407]: I0123 23:56:18.017561 3407 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 23:56:18.019720 kubelet[3407]: I0123 23:56:18.019262 3407 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:56:18.025203 kubelet[3407]: I0123 23:56:18.025161 3407 factory.go:221] Registration of the systemd container factory successfully Jan 23 23:56:18.025825 kubelet[3407]: I0123 23:56:18.025684 3407 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:56:18.034186 kubelet[3407]: I0123 23:56:18.034043 3407 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 23:56:18.034186 kubelet[3407]: I0123 23:56:18.034106 3407 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 23:56:18.034186 kubelet[3407]: I0123 23:56:18.034140 3407 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 23:56:18.035159 kubelet[3407]: I0123 23:56:18.034156 3407 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 23:56:18.038834 kubelet[3407]: E0123 23:56:18.038680 3407 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:56:18.052503 kubelet[3407]: I0123 23:56:18.052062 3407 factory.go:221] Registration of the containerd container factory successfully Jan 23 23:56:18.055173 kubelet[3407]: E0123 23:56:18.055130 3407 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:56:18.144481 kubelet[3407]: E0123 23:56:18.142077 3407 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 23:56:18.152994 kubelet[3407]: I0123 23:56:18.152953 3407 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:56:18.152994 kubelet[3407]: I0123 23:56:18.152985 3407 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:56:18.153225 kubelet[3407]: I0123 23:56:18.153038 3407 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:56:18.153352 kubelet[3407]: I0123 23:56:18.153321 3407 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 23:56:18.153424 kubelet[3407]: I0123 23:56:18.153354 3407 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 23:56:18.153424 kubelet[3407]: I0123 23:56:18.153393 3407 policy_none.go:49] "None policy: Start" Jan 23 23:56:18.153424 kubelet[3407]: I0123 23:56:18.153411 3407 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:56:18.153668 kubelet[3407]: I0123 23:56:18.153431 3407 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:56:18.154093 kubelet[3407]: I0123 23:56:18.154063 3407 state_mem.go:75] "Updated machine memory state" Jan 23 23:56:18.175490 kubelet[3407]: I0123 23:56:18.174110 3407 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 23:56:18.175490 kubelet[3407]: I0123 23:56:18.174398 3407 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:56:18.175490 kubelet[3407]: I0123 23:56:18.174418 3407 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:56:18.175490 kubelet[3407]: I0123 23:56:18.175371 3407 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:56:18.187955 kubelet[3407]: E0123 23:56:18.187901 3407 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 23:56:18.305666 kubelet[3407]: I0123 23:56:18.305397 3407 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-20-253" Jan 23 23:56:18.323164 kubelet[3407]: I0123 23:56:18.323123 3407 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-20-253" Jan 23 23:56:18.323695 kubelet[3407]: I0123 23:56:18.323647 3407 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-20-253" Jan 23 23:56:18.343963 kubelet[3407]: I0123 23:56:18.343129 3407 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-20-253" Jan 23 23:56:18.345113 kubelet[3407]: I0123 23:56:18.344818 3407 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-20-253" Jan 23 23:56:18.347493 kubelet[3407]: I0123 23:56:18.346164 3407 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-20-253" Jan 23 23:56:18.421864 kubelet[3407]: I0123 23:56:18.421282 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8b24af9ba9ed2bb2e74795bb40f82c46-ca-certs\") pod \"kube-apiserver-ip-172-31-20-253\" (UID: \"8b24af9ba9ed2bb2e74795bb40f82c46\") " pod="kube-system/kube-apiserver-ip-172-31-20-253" Jan 23 23:56:18.421864 kubelet[3407]: I0123 23:56:18.421390 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8b24af9ba9ed2bb2e74795bb40f82c46-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-253\" (UID: \"8b24af9ba9ed2bb2e74795bb40f82c46\") " pod="kube-system/kube-apiserver-ip-172-31-20-253" Jan 23 23:56:18.421864 kubelet[3407]: I0123 23:56:18.421435 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8b24af9ba9ed2bb2e74795bb40f82c46-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-253\" (UID: \"8b24af9ba9ed2bb2e74795bb40f82c46\") " pod="kube-system/kube-apiserver-ip-172-31-20-253" Jan 23 23:56:18.421864 kubelet[3407]: I0123 23:56:18.421526 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/92a04533c8206c9d2166ff913aa66cdf-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-253\" (UID: \"92a04533c8206c9d2166ff913aa66cdf\") " pod="kube-system/kube-controller-manager-ip-172-31-20-253" Jan 23 23:56:18.421864 kubelet[3407]: I0123 23:56:18.421567 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/92a04533c8206c9d2166ff913aa66cdf-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-253\" (UID: \"92a04533c8206c9d2166ff913aa66cdf\") " pod="kube-system/kube-controller-manager-ip-172-31-20-253" Jan 23 23:56:18.422223 kubelet[3407]: I0123 23:56:18.421604 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92a04533c8206c9d2166ff913aa66cdf-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-253\" (UID: \"92a04533c8206c9d2166ff913aa66cdf\") " pod="kube-system/kube-controller-manager-ip-172-31-20-253" Jan 23 23:56:18.422223 kubelet[3407]: I0123 23:56:18.421643 3407 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92a04533c8206c9d2166ff913aa66cdf-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-253\" (UID: \"92a04533c8206c9d2166ff913aa66cdf\") " pod="kube-system/kube-controller-manager-ip-172-31-20-253" Jan 23 23:56:18.422223 kubelet[3407]: I0123 23:56:18.421685 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92a04533c8206c9d2166ff913aa66cdf-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-253\" (UID: \"92a04533c8206c9d2166ff913aa66cdf\") " pod="kube-system/kube-controller-manager-ip-172-31-20-253" Jan 23 23:56:18.422223 kubelet[3407]: I0123 23:56:18.421728 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/75e36211074ef53eebf26dcdce33a8e6-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-253\" (UID: \"75e36211074ef53eebf26dcdce33a8e6\") " pod="kube-system/kube-scheduler-ip-172-31-20-253" Jan 23 23:56:18.914400 kubelet[3407]: I0123 23:56:18.914033 3407 apiserver.go:52] "Watching apiserver" Jan 23 23:56:19.014887 kubelet[3407]: I0123 23:56:19.014824 3407 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:56:19.093968 kubelet[3407]: I0123 23:56:19.093811 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-253" podStartSLOduration=1.093764403 podStartE2EDuration="1.093764403s" podCreationTimestamp="2026-01-23 23:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:56:19.091443687 +0000 UTC m=+1.314082952" watchObservedRunningTime="2026-01-23 23:56:19.093764403 +0000 UTC m=+1.316403632" Jan 23 23:56:19.094559 kubelet[3407]: I0123 23:56:19.094310 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-253" podStartSLOduration=1.094268631 podStartE2EDuration="1.094268631s" podCreationTimestamp="2026-01-23 23:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:56:19.071569443 +0000 UTC m=+1.294208672" watchObservedRunningTime="2026-01-23 23:56:19.094268631 +0000 UTC m=+1.316907872" Jan 23 23:56:19.141890 kubelet[3407]: I0123 23:56:19.141631 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-253" podStartSLOduration=1.141607983 podStartE2EDuration="1.141607983s" podCreationTimestamp="2026-01-23 23:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:56:19.117963687 +0000 UTC m=+1.340602952" watchObservedRunningTime="2026-01-23 23:56:19.141607983 +0000 UTC m=+1.364247224" Jan 23 23:56:22.867354 kubelet[3407]: I0123 23:56:22.867130 3407 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 23:56:22.868087 containerd[2029]: time="2026-01-23T23:56:22.868011106Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
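The runtime-config update above hands the node's pod CIDR (192.168.0.0/24, presumably assigned by the controller manager's node IPAM once registration succeeded) down to containerd so a CNI plugin can allocate pod IPs from it. For scale, a /24 leaves this node on the order of 250 usable pod addresses, as the sketch below works out:

package main

import (
	"fmt"
	"net"
)

func main() {
	// CIDR taken from the "Updating Pod CIDR" entry above.
	_, ipnet, err := net.ParseCIDR("192.168.0.0/24")
	if err != nil {
		panic(err)
	}
	ones, bits := ipnet.Mask.Size()
	fmt.Printf("%s: %d addresses for pods on this node\n", ipnet, 1<<(bits-ones))
	// -> 256 addresses; 254 usable after network/broadcast, fewer still
	//    after whatever the CNI plugin reserves for itself.
}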
Jan 23 23:56:22.870515 kubelet[3407]: I0123 23:56:22.869668 3407 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 23:56:23.849677 systemd[1]: Created slice kubepods-besteffort-pod991d69a1_85da_48b5_8fc6_2fd6604592be.slice - libcontainer container kubepods-besteffort-pod991d69a1_85da_48b5_8fc6_2fd6604592be.slice. Jan 23 23:56:23.857691 kubelet[3407]: I0123 23:56:23.857163 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/991d69a1-85da-48b5-8fc6-2fd6604592be-lib-modules\") pod \"kube-proxy-x2pkw\" (UID: \"991d69a1-85da-48b5-8fc6-2fd6604592be\") " pod="kube-system/kube-proxy-x2pkw" Jan 23 23:56:23.857691 kubelet[3407]: I0123 23:56:23.857241 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/991d69a1-85da-48b5-8fc6-2fd6604592be-kube-proxy\") pod \"kube-proxy-x2pkw\" (UID: \"991d69a1-85da-48b5-8fc6-2fd6604592be\") " pod="kube-system/kube-proxy-x2pkw" Jan 23 23:56:23.857691 kubelet[3407]: I0123 23:56:23.857280 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/991d69a1-85da-48b5-8fc6-2fd6604592be-xtables-lock\") pod \"kube-proxy-x2pkw\" (UID: \"991d69a1-85da-48b5-8fc6-2fd6604592be\") " pod="kube-system/kube-proxy-x2pkw" Jan 23 23:56:23.857691 kubelet[3407]: I0123 23:56:23.857319 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgz59\" (UniqueName: \"kubernetes.io/projected/991d69a1-85da-48b5-8fc6-2fd6604592be-kube-api-access-tgz59\") pod \"kube-proxy-x2pkw\" (UID: \"991d69a1-85da-48b5-8fc6-2fd6604592be\") " pod="kube-system/kube-proxy-x2pkw" Jan 23 23:56:24.021320 systemd[1]: Created slice kubepods-besteffort-podbaf4e5ea_03e1_4719_9f0f_9fd1f8521b40.slice - libcontainer container kubepods-besteffort-podbaf4e5ea_03e1_4719_9f0f_9fd1f8521b40.slice. Jan 23 23:56:24.059123 kubelet[3407]: I0123 23:56:24.058994 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/baf4e5ea-03e1-4719-9f0f-9fd1f8521b40-var-lib-calico\") pod \"tigera-operator-7dcd859c48-jqjdf\" (UID: \"baf4e5ea-03e1-4719-9f0f-9fd1f8521b40\") " pod="tigera-operator/tigera-operator-7dcd859c48-jqjdf" Jan 23 23:56:24.059123 kubelet[3407]: I0123 23:56:24.059059 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4cgq\" (UniqueName: \"kubernetes.io/projected/baf4e5ea-03e1-4719-9f0f-9fd1f8521b40-kube-api-access-t4cgq\") pod \"tigera-operator-7dcd859c48-jqjdf\" (UID: \"baf4e5ea-03e1-4719-9f0f-9fd1f8521b40\") " pod="tigera-operator/tigera-operator-7dcd859c48-jqjdf" Jan 23 23:56:24.167026 containerd[2029]: time="2026-01-23T23:56:24.166736072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x2pkw,Uid:991d69a1-85da-48b5-8fc6-2fd6604592be,Namespace:kube-system,Attempt:0,}" Jan 23 23:56:24.220159 containerd[2029]: time="2026-01-23T23:56:24.219440756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:24.220159 containerd[2029]: time="2026-01-23T23:56:24.219825620Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:24.220159 containerd[2029]: time="2026-01-23T23:56:24.219890336Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:24.220159 containerd[2029]: time="2026-01-23T23:56:24.220070840Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:24.270822 systemd[1]: Started cri-containerd-563e0776551697b8b67161146c2e1af6de398dfad0c2e4e2f519bb7efca79e5c.scope - libcontainer container 563e0776551697b8b67161146c2e1af6de398dfad0c2e4e2f519bb7efca79e5c. Jan 23 23:56:24.315377 containerd[2029]: time="2026-01-23T23:56:24.315326289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x2pkw,Uid:991d69a1-85da-48b5-8fc6-2fd6604592be,Namespace:kube-system,Attempt:0,} returns sandbox id \"563e0776551697b8b67161146c2e1af6de398dfad0c2e4e2f519bb7efca79e5c\"" Jan 23 23:56:24.322642 containerd[2029]: time="2026-01-23T23:56:24.322590093Z" level=info msg="CreateContainer within sandbox \"563e0776551697b8b67161146c2e1af6de398dfad0c2e4e2f519bb7efca79e5c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 23:56:24.330344 containerd[2029]: time="2026-01-23T23:56:24.330281529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jqjdf,Uid:baf4e5ea-03e1-4719-9f0f-9fd1f8521b40,Namespace:tigera-operator,Attempt:0,}" Jan 23 23:56:24.370034 containerd[2029]: time="2026-01-23T23:56:24.369974397Z" level=info msg="CreateContainer within sandbox \"563e0776551697b8b67161146c2e1af6de398dfad0c2e4e2f519bb7efca79e5c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8b8ddc1b4797adf02fedf1e91e90cf843176f709732e94ea28c7fb9c91962dbe\"" Jan 23 23:56:24.372871 containerd[2029]: time="2026-01-23T23:56:24.371640249Z" level=info msg="StartContainer for \"8b8ddc1b4797adf02fedf1e91e90cf843176f709732e94ea28c7fb9c91962dbe\"" Jan 23 23:56:24.400056 containerd[2029]: time="2026-01-23T23:56:24.398351013Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:24.400056 containerd[2029]: time="2026-01-23T23:56:24.399605145Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:24.400056 containerd[2029]: time="2026-01-23T23:56:24.399667857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:24.400056 containerd[2029]: time="2026-01-23T23:56:24.399831021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:24.433787 systemd[1]: Started cri-containerd-8b8ddc1b4797adf02fedf1e91e90cf843176f709732e94ea28c7fb9c91962dbe.scope - libcontainer container 8b8ddc1b4797adf02fedf1e91e90cf843176f709732e94ea28c7fb9c91962dbe. Jan 23 23:56:24.450797 systemd[1]: Started cri-containerd-09dede72d0937a584f4ea21204fc80b10c286a79d0e3855bda4f503dc715b5f0.scope - libcontainer container 09dede72d0937a584f4ea21204fc80b10c286a79d0e3855bda4f503dc715b5f0. 
Jan 23 23:56:24.534975 containerd[2029]: time="2026-01-23T23:56:24.534677782Z" level=info msg="StartContainer for \"8b8ddc1b4797adf02fedf1e91e90cf843176f709732e94ea28c7fb9c91962dbe\" returns successfully" Jan 23 23:56:24.554670 containerd[2029]: time="2026-01-23T23:56:24.554607490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-jqjdf,Uid:baf4e5ea-03e1-4719-9f0f-9fd1f8521b40,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"09dede72d0937a584f4ea21204fc80b10c286a79d0e3855bda4f503dc715b5f0\"" Jan 23 23:56:24.560152 containerd[2029]: time="2026-01-23T23:56:24.559979638Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 23:56:25.648889 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1716732191.mount: Deactivated successfully. Jan 23 23:56:25.894933 kubelet[3407]: I0123 23:56:25.894840 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x2pkw" podStartSLOduration=2.894815593 podStartE2EDuration="2.894815593s" podCreationTimestamp="2026-01-23 23:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:56:25.137624841 +0000 UTC m=+7.360264154" watchObservedRunningTime="2026-01-23 23:56:25.894815593 +0000 UTC m=+8.117454810" Jan 23 23:56:26.441263 containerd[2029]: time="2026-01-23T23:56:26.441207035Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:26.444625 containerd[2029]: time="2026-01-23T23:56:26.444537011Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 23 23:56:26.447154 containerd[2029]: time="2026-01-23T23:56:26.447077111Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:26.452149 containerd[2029]: time="2026-01-23T23:56:26.452049947Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:26.454523 containerd[2029]: time="2026-01-23T23:56:26.453742055Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.893698805s" Jan 23 23:56:26.454523 containerd[2029]: time="2026-01-23T23:56:26.453821363Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 23 23:56:26.457929 containerd[2029]: time="2026-01-23T23:56:26.457871939Z" level=info msg="CreateContainer within sandbox \"09dede72d0937a584f4ea21204fc80b10c286a79d0e3855bda4f503dc715b5f0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 23:56:26.483189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount718904253.mount: Deactivated successfully. 
Jan 23 23:56:26.485713 containerd[2029]: time="2026-01-23T23:56:26.485640384Z" level=info msg="CreateContainer within sandbox \"09dede72d0937a584f4ea21204fc80b10c286a79d0e3855bda4f503dc715b5f0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"dfab10117e3961870367fde734f67dfe8b70be02b946882b857c869aa55aa297\"" Jan 23 23:56:26.489136 containerd[2029]: time="2026-01-23T23:56:26.487601040Z" level=info msg="StartContainer for \"dfab10117e3961870367fde734f67dfe8b70be02b946882b857c869aa55aa297\"" Jan 23 23:56:26.541790 systemd[1]: Started cri-containerd-dfab10117e3961870367fde734f67dfe8b70be02b946882b857c869aa55aa297.scope - libcontainer container dfab10117e3961870367fde734f67dfe8b70be02b946882b857c869aa55aa297. Jan 23 23:56:26.586700 containerd[2029]: time="2026-01-23T23:56:26.586420812Z" level=info msg="StartContainer for \"dfab10117e3961870367fde734f67dfe8b70be02b946882b857c869aa55aa297\" returns successfully" Jan 23 23:56:27.908187 kubelet[3407]: I0123 23:56:27.908095 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-jqjdf" podStartSLOduration=3.010692934 podStartE2EDuration="4.908048463s" podCreationTimestamp="2026-01-23 23:56:23 +0000 UTC" firstStartedPulling="2026-01-23 23:56:24.558154774 +0000 UTC m=+6.780794003" lastFinishedPulling="2026-01-23 23:56:26.455510279 +0000 UTC m=+8.678149532" observedRunningTime="2026-01-23 23:56:27.149488907 +0000 UTC m=+9.372128172" watchObservedRunningTime="2026-01-23 23:56:27.908048463 +0000 UTC m=+10.130687692" Jan 23 23:56:33.468521 sudo[2343]: pam_unix(sudo:session): session closed for user root Jan 23 23:56:33.550563 sshd[2340]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:33.559081 systemd[1]: sshd@6-172.31.20.253:22-4.153.228.146:34560.service: Deactivated successfully. Jan 23 23:56:33.566841 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 23:56:33.568873 systemd[1]: session-7.scope: Consumed 11.774s CPU time, 152.9M memory peak, 0B memory swap peak. Jan 23 23:56:33.573245 systemd-logind[2003]: Session 7 logged out. Waiting for processes to exit. Jan 23 23:56:33.576204 systemd-logind[2003]: Removed session 7. Jan 23 23:56:53.536872 systemd[1]: Created slice kubepods-besteffort-pode234148b_8448_4247_a95a_c05877bc5b7f.slice - libcontainer container kubepods-besteffort-pode234148b_8448_4247_a95a_c05877bc5b7f.slice. 
Jan 23 23:56:53.567260 kubelet[3407]: I0123 23:56:53.566942 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnjj5\" (UniqueName: \"kubernetes.io/projected/e234148b-8448-4247-a95a-c05877bc5b7f-kube-api-access-pnjj5\") pod \"calico-typha-685dc9ffc5-z9wbw\" (UID: \"e234148b-8448-4247-a95a-c05877bc5b7f\") " pod="calico-system/calico-typha-685dc9ffc5-z9wbw" Jan 23 23:56:53.567260 kubelet[3407]: I0123 23:56:53.567101 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e234148b-8448-4247-a95a-c05877bc5b7f-typha-certs\") pod \"calico-typha-685dc9ffc5-z9wbw\" (UID: \"e234148b-8448-4247-a95a-c05877bc5b7f\") " pod="calico-system/calico-typha-685dc9ffc5-z9wbw" Jan 23 23:56:53.567260 kubelet[3407]: I0123 23:56:53.567159 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e234148b-8448-4247-a95a-c05877bc5b7f-tigera-ca-bundle\") pod \"calico-typha-685dc9ffc5-z9wbw\" (UID: \"e234148b-8448-4247-a95a-c05877bc5b7f\") " pod="calico-system/calico-typha-685dc9ffc5-z9wbw" Jan 23 23:56:53.837966 systemd[1]: Created slice kubepods-besteffort-poddd474f7b_1d25_4d91_930f_771cf18e28a7.slice - libcontainer container kubepods-besteffort-poddd474f7b_1d25_4d91_930f_771cf18e28a7.slice. Jan 23 23:56:53.845112 containerd[2029]: time="2026-01-23T23:56:53.844302316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-685dc9ffc5-z9wbw,Uid:e234148b-8448-4247-a95a-c05877bc5b7f,Namespace:calico-system,Attempt:0,}" Jan 23 23:56:53.869838 kubelet[3407]: I0123 23:56:53.869760 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/dd474f7b-1d25-4d91-930f-771cf18e28a7-node-certs\") pod \"calico-node-25822\" (UID: \"dd474f7b-1d25-4d91-930f-771cf18e28a7\") " pod="calico-system/calico-node-25822" Jan 23 23:56:53.869999 kubelet[3407]: I0123 23:56:53.869846 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/dd474f7b-1d25-4d91-930f-771cf18e28a7-policysync\") pod \"calico-node-25822\" (UID: \"dd474f7b-1d25-4d91-930f-771cf18e28a7\") " pod="calico-system/calico-node-25822" Jan 23 23:56:53.869999 kubelet[3407]: I0123 23:56:53.869888 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/dd474f7b-1d25-4d91-930f-771cf18e28a7-flexvol-driver-host\") pod \"calico-node-25822\" (UID: \"dd474f7b-1d25-4d91-930f-771cf18e28a7\") " pod="calico-system/calico-node-25822" Jan 23 23:56:53.869999 kubelet[3407]: I0123 23:56:53.869925 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/dd474f7b-1d25-4d91-930f-771cf18e28a7-var-lib-calico\") pod \"calico-node-25822\" (UID: \"dd474f7b-1d25-4d91-930f-771cf18e28a7\") " pod="calico-system/calico-node-25822" Jan 23 23:56:53.869999 kubelet[3407]: I0123 23:56:53.869962 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/dd474f7b-1d25-4d91-930f-771cf18e28a7-cni-net-dir\") pod \"calico-node-25822\" (UID: 
\"dd474f7b-1d25-4d91-930f-771cf18e28a7\") " pod="calico-system/calico-node-25822" Jan 23 23:56:53.871550 kubelet[3407]: I0123 23:56:53.870001 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd474f7b-1d25-4d91-930f-771cf18e28a7-lib-modules\") pod \"calico-node-25822\" (UID: \"dd474f7b-1d25-4d91-930f-771cf18e28a7\") " pod="calico-system/calico-node-25822" Jan 23 23:56:53.871550 kubelet[3407]: I0123 23:56:53.870035 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/dd474f7b-1d25-4d91-930f-771cf18e28a7-cni-log-dir\") pod \"calico-node-25822\" (UID: \"dd474f7b-1d25-4d91-930f-771cf18e28a7\") " pod="calico-system/calico-node-25822" Jan 23 23:56:53.871550 kubelet[3407]: I0123 23:56:53.870072 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd474f7b-1d25-4d91-930f-771cf18e28a7-xtables-lock\") pod \"calico-node-25822\" (UID: \"dd474f7b-1d25-4d91-930f-771cf18e28a7\") " pod="calico-system/calico-node-25822" Jan 23 23:56:53.871550 kubelet[3407]: I0123 23:56:53.870111 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/dd474f7b-1d25-4d91-930f-771cf18e28a7-cni-bin-dir\") pod \"calico-node-25822\" (UID: \"dd474f7b-1d25-4d91-930f-771cf18e28a7\") " pod="calico-system/calico-node-25822" Jan 23 23:56:53.871550 kubelet[3407]: I0123 23:56:53.870148 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bgz6\" (UniqueName: \"kubernetes.io/projected/dd474f7b-1d25-4d91-930f-771cf18e28a7-kube-api-access-8bgz6\") pod \"calico-node-25822\" (UID: \"dd474f7b-1d25-4d91-930f-771cf18e28a7\") " pod="calico-system/calico-node-25822" Jan 23 23:56:53.871841 kubelet[3407]: I0123 23:56:53.870185 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dd474f7b-1d25-4d91-930f-771cf18e28a7-tigera-ca-bundle\") pod \"calico-node-25822\" (UID: \"dd474f7b-1d25-4d91-930f-771cf18e28a7\") " pod="calico-system/calico-node-25822" Jan 23 23:56:53.871841 kubelet[3407]: I0123 23:56:53.870238 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/dd474f7b-1d25-4d91-930f-771cf18e28a7-var-run-calico\") pod \"calico-node-25822\" (UID: \"dd474f7b-1d25-4d91-930f-771cf18e28a7\") " pod="calico-system/calico-node-25822" Jan 23 23:56:53.902786 containerd[2029]: time="2026-01-23T23:56:53.902549104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:53.902786 containerd[2029]: time="2026-01-23T23:56:53.902669188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:53.902786 containerd[2029]: time="2026-01-23T23:56:53.902707444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:53.904056 containerd[2029]: time="2026-01-23T23:56:53.903919816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:53.953828 systemd[1]: Started cri-containerd-0c94cee73d387d4787d32477125ddd2bad2b1f027a3f40cb77cadb73ad936676.scope - libcontainer container 0c94cee73d387d4787d32477125ddd2bad2b1f027a3f40cb77cadb73ad936676. Jan 23 23:56:53.983004 kubelet[3407]: E0123 23:56:53.976427 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wc9cs" podUID="116c2572-ef7b-49fd-a16b-25d6e19f65b8" Jan 23 23:56:53.985486 kubelet[3407]: E0123 23:56:53.984095 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:53.985732 kubelet[3407]: W0123 23:56:53.985693 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:53.985963 kubelet[3407]: E0123 23:56:53.985935 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the driver-call.go:262 / driver-call.go:149 / plugins.go:695 FlexVolume probe-error triplet above repeats verbatim, with new timestamps, from 23:56:54.024 through 23:56:54.262; only the unique entries interleaved with those repeats are kept below ...]
Jan 23 23:56:54.074720 kubelet[3407]: I0123 23:56:54.074549 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/116c2572-ef7b-49fd-a16b-25d6e19f65b8-kubelet-dir\") pod \"csi-node-driver-wc9cs\" (UID: \"116c2572-ef7b-49fd-a16b-25d6e19f65b8\") " pod="calico-system/csi-node-driver-wc9cs" Jan 23 23:56:54.079278 kubelet[3407]: I0123 23:56:54.079088 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/116c2572-ef7b-49fd-a16b-25d6e19f65b8-varrun\") pod \"csi-node-driver-wc9cs\" (UID: \"116c2572-ef7b-49fd-a16b-25d6e19f65b8\") " pod="calico-system/csi-node-driver-wc9cs" Jan 23 23:56:54.080829 kubelet[3407]: I0123 23:56:54.080763 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/116c2572-ef7b-49fd-a16b-25d6e19f65b8-socket-dir\") pod \"csi-node-driver-wc9cs\" (UID: \"116c2572-ef7b-49fd-a16b-25d6e19f65b8\") " pod="calico-system/csi-node-driver-wc9cs" Jan 23 23:56:54.083810 kubelet[3407]: I0123 23:56:54.083702 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/116c2572-ef7b-49fd-a16b-25d6e19f65b8-registration-dir\") pod \"csi-node-driver-wc9cs\" (UID: \"116c2572-ef7b-49fd-a16b-25d6e19f65b8\") " pod="calico-system/csi-node-driver-wc9cs" Jan 23 23:56:54.088651 kubelet[3407]: I0123 23:56:54.088400 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29xwt\" (UniqueName: \"kubernetes.io/projected/116c2572-ef7b-49fd-a16b-25d6e19f65b8-kube-api-access-29xwt\") pod \"csi-node-driver-wc9cs\" (UID: \"116c2572-ef7b-49fd-a16b-25d6e19f65b8\") " pod="calico-system/csi-node-driver-wc9cs" Jan 23 23:56:54.147968 containerd[2029]: time="2026-01-23T23:56:54.147431029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-25822,Uid:dd474f7b-1d25-4d91-930f-771cf18e28a7,Namespace:calico-system,Attempt:0,}" Jan 23 23:56:54.195755 containerd[2029]: time="2026-01-23T23:56:54.195679777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-685dc9ffc5-z9wbw,Uid:e234148b-8448-4247-a95a-c05877bc5b7f,Namespace:calico-system,Attempt:0,} returns sandbox id \"0c94cee73d387d4787d32477125ddd2bad2b1f027a3f40cb77cadb73ad936676\"" Jan 23 23:56:54.201982 containerd[2029]: time="2026-01-23T23:56:54.201726493Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 23:56:54.273215 containerd[2029]: time="2026-01-23T23:56:54.272648306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:54.273766 containerd[2029]: time="2026-01-23T23:56:54.273183638Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:54.274568 containerd[2029]: time="2026-01-23T23:56:54.273568466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:54.275657 containerd[2029]: time="2026-01-23T23:56:54.275254082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:54.312804 systemd[1]: Started cri-containerd-1948e0d6612a6635f1c091b85e0e4c6d89f72a558d575271f9112ad7821571bc.scope - libcontainer container 1948e0d6612a6635f1c091b85e0e4c6d89f72a558d575271f9112ad7821571bc. Jan 23 23:56:54.359872 containerd[2029]: time="2026-01-23T23:56:54.359625626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-25822,Uid:dd474f7b-1d25-4d91-930f-771cf18e28a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"1948e0d6612a6635f1c091b85e0e4c6d89f72a558d575271f9112ad7821571bc\"" Jan 23 23:56:55.436974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3413282344.mount: Deactivated successfully. Jan 23 23:56:56.037940 kubelet[3407]: E0123 23:56:56.037778 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wc9cs" podUID="116c2572-ef7b-49fd-a16b-25d6e19f65b8" Jan 23 23:56:56.256966 containerd[2029]: time="2026-01-23T23:56:56.255557020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:56.256966 containerd[2029]: time="2026-01-23T23:56:56.256913476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Jan 23 23:56:56.257933 containerd[2029]: time="2026-01-23T23:56:56.257853088Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:56.262333 containerd[2029]: time="2026-01-23T23:56:56.261681364Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:56.263281 containerd[2029]: time="2026-01-23T23:56:56.263221348Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.061406295s" Jan 23 23:56:56.263360 containerd[2029]: time="2026-01-23T23:56:56.263279020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 23 23:56:56.269832 containerd[2029]: time="2026-01-23T23:56:56.269769832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 23:56:56.296029 containerd[2029]: time="2026-01-23T23:56:56.295561480Z" level=info msg="CreateContainer within sandbox \"0c94cee73d387d4787d32477125ddd2bad2b1f027a3f40cb77cadb73ad936676\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 23:56:56.321991 containerd[2029]: time="2026-01-23T23:56:56.319490296Z" level=info msg="CreateContainer within sandbox \"0c94cee73d387d4787d32477125ddd2bad2b1f027a3f40cb77cadb73ad936676\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"efe4b10367c4cc0c83e0f4d7d65f60f0fcbb73310edfd7d386c4fdf754e4d5b7\"" Jan 23 23:56:56.322732 containerd[2029]: 
time="2026-01-23T23:56:56.322682272Z" level=info msg="StartContainer for \"efe4b10367c4cc0c83e0f4d7d65f60f0fcbb73310edfd7d386c4fdf754e4d5b7\"" Jan 23 23:56:56.375791 systemd[1]: Started cri-containerd-efe4b10367c4cc0c83e0f4d7d65f60f0fcbb73310edfd7d386c4fdf754e4d5b7.scope - libcontainer container efe4b10367c4cc0c83e0f4d7d65f60f0fcbb73310edfd7d386c4fdf754e4d5b7. Jan 23 23:56:56.448870 containerd[2029]: time="2026-01-23T23:56:56.448794016Z" level=info msg="StartContainer for \"efe4b10367c4cc0c83e0f4d7d65f60f0fcbb73310edfd7d386c4fdf754e4d5b7\" returns successfully" Jan 23 23:56:57.294157 kubelet[3407]: E0123 23:56:57.293622 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.296319 kubelet[3407]: W0123 23:56:57.293666 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.296319 kubelet[3407]: E0123 23:56:57.294617 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.296319 kubelet[3407]: E0123 23:56:57.295207 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.296319 kubelet[3407]: W0123 23:56:57.295230 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.296319 kubelet[3407]: E0123 23:56:57.295328 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.296319 kubelet[3407]: E0123 23:56:57.296265 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.298057 kubelet[3407]: W0123 23:56:57.296409 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.298057 kubelet[3407]: E0123 23:56:57.296885 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.299577 kubelet[3407]: E0123 23:56:57.299028 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.299577 kubelet[3407]: W0123 23:56:57.299059 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.299577 kubelet[3407]: E0123 23:56:57.299113 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:56:57.303812 kubelet[3407]: E0123 23:56:57.303370 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.303812 kubelet[3407]: W0123 23:56:57.303585 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.303812 kubelet[3407]: E0123 23:56:57.303619 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.306013 kubelet[3407]: E0123 23:56:57.305979 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.306148 kubelet[3407]: W0123 23:56:57.306123 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.306289 kubelet[3407]: E0123 23:56:57.306262 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.308570 kubelet[3407]: E0123 23:56:57.307886 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.308570 kubelet[3407]: W0123 23:56:57.307922 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.308570 kubelet[3407]: E0123 23:56:57.308075 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.310439 kubelet[3407]: E0123 23:56:57.310252 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.311020 kubelet[3407]: W0123 23:56:57.310726 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.311020 kubelet[3407]: E0123 23:56:57.310770 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.312600 kubelet[3407]: E0123 23:56:57.312235 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.312600 kubelet[3407]: W0123 23:56:57.312268 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.312600 kubelet[3407]: E0123 23:56:57.312300 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:56:57.315237 kubelet[3407]: E0123 23:56:57.314787 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.315237 kubelet[3407]: W0123 23:56:57.314824 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.315237 kubelet[3407]: E0123 23:56:57.314856 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.316785 kubelet[3407]: E0123 23:56:57.316741 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.317050 kubelet[3407]: W0123 23:56:57.316907 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.317171 kubelet[3407]: E0123 23:56:57.317086 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.319705 kubelet[3407]: E0123 23:56:57.319370 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.319705 kubelet[3407]: W0123 23:56:57.319419 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.319705 kubelet[3407]: E0123 23:56:57.319498 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.321929 kubelet[3407]: E0123 23:56:57.320806 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.321929 kubelet[3407]: W0123 23:56:57.320863 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.321929 kubelet[3407]: E0123 23:56:57.320899 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.323623 kubelet[3407]: E0123 23:56:57.323363 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.323836 kubelet[3407]: W0123 23:56:57.323803 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.324231 kubelet[3407]: E0123 23:56:57.324029 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:56:57.325565 kubelet[3407]: E0123 23:56:57.325347 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.325565 kubelet[3407]: W0123 23:56:57.325385 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.325565 kubelet[3407]: E0123 23:56:57.325417 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.327141 kubelet[3407]: I0123 23:56:57.326829 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-685dc9ffc5-z9wbw" podStartSLOduration=2.262835486 podStartE2EDuration="4.326807753s" podCreationTimestamp="2026-01-23 23:56:53 +0000 UTC" firstStartedPulling="2026-01-23 23:56:54.201014365 +0000 UTC m=+36.423653594" lastFinishedPulling="2026-01-23 23:56:56.264986644 +0000 UTC m=+38.487625861" observedRunningTime="2026-01-23 23:56:57.322972409 +0000 UTC m=+39.545611662" watchObservedRunningTime="2026-01-23 23:56:57.326807753 +0000 UTC m=+39.549446982" Jan 23 23:56:57.343049 kubelet[3407]: E0123 23:56:57.342316 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.343049 kubelet[3407]: W0123 23:56:57.342358 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.343049 kubelet[3407]: E0123 23:56:57.342391 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.343049 kubelet[3407]: E0123 23:56:57.342810 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.343049 kubelet[3407]: W0123 23:56:57.342845 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.343049 kubelet[3407]: E0123 23:56:57.342869 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.344501 kubelet[3407]: E0123 23:56:57.343981 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.344904 kubelet[3407]: W0123 23:56:57.344019 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.344979 kubelet[3407]: E0123 23:56:57.344927 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:56:57.347596 kubelet[3407]: E0123 23:56:57.347535 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.347596 kubelet[3407]: W0123 23:56:57.347582 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.348059 kubelet[3407]: E0123 23:56:57.347837 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.348407 kubelet[3407]: E0123 23:56:57.348366 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.348407 kubelet[3407]: W0123 23:56:57.348397 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.348987 kubelet[3407]: E0123 23:56:57.348939 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.350545 kubelet[3407]: E0123 23:56:57.350498 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.350696 kubelet[3407]: W0123 23:56:57.350537 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.351007 kubelet[3407]: E0123 23:56:57.350971 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.351635 kubelet[3407]: E0123 23:56:57.351602 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.351754 kubelet[3407]: W0123 23:56:57.351633 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.351811 kubelet[3407]: E0123 23:56:57.351766 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.352130 kubelet[3407]: E0123 23:56:57.352103 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.352196 kubelet[3407]: W0123 23:56:57.352129 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.352431 kubelet[3407]: E0123 23:56:57.352326 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:56:57.352637 kubelet[3407]: E0123 23:56:57.352603 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.352746 kubelet[3407]: W0123 23:56:57.352641 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.352890 kubelet[3407]: E0123 23:56:57.352850 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.353197 kubelet[3407]: E0123 23:56:57.353170 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.353262 kubelet[3407]: W0123 23:56:57.353196 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.353262 kubelet[3407]: E0123 23:56:57.353228 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.353697 kubelet[3407]: E0123 23:56:57.353668 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.356519 kubelet[3407]: W0123 23:56:57.353696 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.356519 kubelet[3407]: E0123 23:56:57.353907 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.357601 kubelet[3407]: E0123 23:56:57.357550 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.357601 kubelet[3407]: W0123 23:56:57.357588 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.358243 kubelet[3407]: E0123 23:56:57.357870 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.358243 kubelet[3407]: E0123 23:56:57.357977 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.358243 kubelet[3407]: W0123 23:56:57.357994 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.358243 kubelet[3407]: E0123 23:56:57.358109 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:56:57.359016 kubelet[3407]: E0123 23:56:57.358415 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.359016 kubelet[3407]: W0123 23:56:57.358434 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.359016 kubelet[3407]: E0123 23:56:57.358512 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.362160 kubelet[3407]: E0123 23:56:57.362116 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.364515 kubelet[3407]: W0123 23:56:57.363659 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.364515 kubelet[3407]: E0123 23:56:57.363730 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.366378 kubelet[3407]: E0123 23:56:57.365883 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.366378 kubelet[3407]: W0123 23:56:57.365968 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.366378 kubelet[3407]: E0123 23:56:57.366150 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.368867 kubelet[3407]: E0123 23:56:57.367608 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.368867 kubelet[3407]: W0123 23:56:57.367648 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.368867 kubelet[3407]: E0123 23:56:57.367682 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:57.371493 kubelet[3407]: E0123 23:56:57.369597 3407 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:57.371708 kubelet[3407]: W0123 23:56:57.371665 3407 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:57.371862 kubelet[3407]: E0123 23:56:57.371800 3407 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:56:57.517355 containerd[2029]: time="2026-01-23T23:56:57.517295790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:57.520562 containerd[2029]: time="2026-01-23T23:56:57.520443642Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Jan 23 23:56:57.521149 containerd[2029]: time="2026-01-23T23:56:57.521061234Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:57.528526 containerd[2029]: time="2026-01-23T23:56:57.528371586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:57.530783 containerd[2029]: time="2026-01-23T23:56:57.529621038Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.259186262s" Jan 23 23:56:57.530783 containerd[2029]: time="2026-01-23T23:56:57.529684002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 23 23:56:57.533858 containerd[2029]: time="2026-01-23T23:56:57.533641926Z" level=info msg="CreateContainer within sandbox \"1948e0d6612a6635f1c091b85e0e4c6d89f72a558d575271f9112ad7821571bc\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 23:56:57.567268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2760449572.mount: Deactivated successfully. Jan 23 23:56:57.570426 containerd[2029]: time="2026-01-23T23:56:57.570193158Z" level=info msg="CreateContainer within sandbox \"1948e0d6612a6635f1c091b85e0e4c6d89f72a558d575271f9112ad7821571bc\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"54a1eb5b57868e0ee9774c190574f3d15f0105bbf36f5323f5a0b8db8bd53bd7\"" Jan 23 23:56:57.575084 containerd[2029]: time="2026-01-23T23:56:57.575032482Z" level=info msg="StartContainer for \"54a1eb5b57868e0ee9774c190574f3d15f0105bbf36f5323f5a0b8db8bd53bd7\"" Jan 23 23:56:57.629893 systemd[1]: Started cri-containerd-54a1eb5b57868e0ee9774c190574f3d15f0105bbf36f5323f5a0b8db8bd53bd7.scope - libcontainer container 54a1eb5b57868e0ee9774c190574f3d15f0105bbf36f5323f5a0b8db8bd53bd7. Jan 23 23:56:57.689614 containerd[2029]: time="2026-01-23T23:56:57.689280391Z" level=info msg="StartContainer for \"54a1eb5b57868e0ee9774c190574f3d15f0105bbf36f5323f5a0b8db8bd53bd7\" returns successfully" Jan 23 23:56:57.721987 systemd[1]: cri-containerd-54a1eb5b57868e0ee9774c190574f3d15f0105bbf36f5323f5a0b8db8bd53bd7.scope: Deactivated successfully. 
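The repeated kubelet errors above are its FlexVolume prober at work: kubelet execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init and unmarshals whatever the driver prints on stdout as JSON, so a missing binary yields empty output and exactly the "unexpected end of JSON input" logged here. The flexvol-driver container started just above (Calico's pod2daemon-flexvol image) exists to install that uds binary. Below is a minimal sketch of the driver side of the handshake, assuming only the standard FlexVolume stdout contract; it is an illustration, not Calico's actual uds driver.

```go
// Minimal FlexVolume driver sketch: kubelet calls <driver> init and
// parses stdout as JSON. Empty stdout is what produces the
// "Failed to unmarshal output for command: init" errors above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON object a FlexVolume driver prints,
// e.g. {"status":"Success","capabilities":{"attach":false}}.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		fmt.Fprintln(os.Stderr, "usage: uds <init|mount|unmount> [args...]")
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Answering init with valid JSON is what stops the probe errors.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		// "Not supported" is the conventional reply for unimplemented calls.
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
	}
}
```

Installed at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, a driver shaped like this would satisfy the init probe; the vendor~driver directory naming is how kubelet maps nodeagent~uds onto the uds executable it execs.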
Jan 23 23:56:58.039102 kubelet[3407]: E0123 23:56:58.037749 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wc9cs" podUID="116c2572-ef7b-49fd-a16b-25d6e19f65b8" Jan 23 23:56:58.119525 containerd[2029]: time="2026-01-23T23:56:58.119250329Z" level=info msg="shim disconnected" id=54a1eb5b57868e0ee9774c190574f3d15f0105bbf36f5323f5a0b8db8bd53bd7 namespace=k8s.io Jan 23 23:56:58.119525 containerd[2029]: time="2026-01-23T23:56:58.119321861Z" level=warning msg="cleaning up after shim disconnected" id=54a1eb5b57868e0ee9774c190574f3d15f0105bbf36f5323f5a0b8db8bd53bd7 namespace=k8s.io Jan 23 23:56:58.119525 containerd[2029]: time="2026-01-23T23:56:58.119345285Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:56:58.275015 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54a1eb5b57868e0ee9774c190574f3d15f0105bbf36f5323f5a0b8db8bd53bd7-rootfs.mount: Deactivated successfully. Jan 23 23:56:58.298197 containerd[2029]: time="2026-01-23T23:56:58.297368550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 23:57:00.047395 kubelet[3407]: E0123 23:57:00.046973 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wc9cs" podUID="116c2572-ef7b-49fd-a16b-25d6e19f65b8" Jan 23 23:57:01.090108 containerd[2029]: time="2026-01-23T23:57:01.090053696Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:01.092268 containerd[2029]: time="2026-01-23T23:57:01.092203124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 23 23:57:01.092857 containerd[2029]: time="2026-01-23T23:57:01.092774252Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:01.096857 containerd[2029]: time="2026-01-23T23:57:01.096791000Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:01.098697 containerd[2029]: time="2026-01-23T23:57:01.098528492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.801099366s" Jan 23 23:57:01.098697 containerd[2029]: time="2026-01-23T23:57:01.098581892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 23 23:57:01.104015 containerd[2029]: time="2026-01-23T23:57:01.103824452Z" level=info msg="CreateContainer within sandbox \"1948e0d6612a6635f1c091b85e0e4c6d89f72a558d575271f9112ad7821571bc\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 
23:57:01.135705 containerd[2029]: time="2026-01-23T23:57:01.132940664Z" level=info msg="CreateContainer within sandbox \"1948e0d6612a6635f1c091b85e0e4c6d89f72a558d575271f9112ad7821571bc\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"72e4d5ecf3a74e43ec997af492ec4052583ea3f19df524af6cc442934f12b6ab\"" Jan 23 23:57:01.135439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3877013824.mount: Deactivated successfully. Jan 23 23:57:01.145061 containerd[2029]: time="2026-01-23T23:57:01.143677208Z" level=info msg="StartContainer for \"72e4d5ecf3a74e43ec997af492ec4052583ea3f19df524af6cc442934f12b6ab\"" Jan 23 23:57:01.213809 systemd[1]: Started cri-containerd-72e4d5ecf3a74e43ec997af492ec4052583ea3f19df524af6cc442934f12b6ab.scope - libcontainer container 72e4d5ecf3a74e43ec997af492ec4052583ea3f19df524af6cc442934f12b6ab. Jan 23 23:57:01.270330 containerd[2029]: time="2026-01-23T23:57:01.270026144Z" level=info msg="StartContainer for \"72e4d5ecf3a74e43ec997af492ec4052583ea3f19df524af6cc442934f12b6ab\" returns successfully" Jan 23 23:57:02.040737 kubelet[3407]: E0123 23:57:02.040672 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wc9cs" podUID="116c2572-ef7b-49fd-a16b-25d6e19f65b8" Jan 23 23:57:02.467319 containerd[2029]: time="2026-01-23T23:57:02.467257330Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 23:57:02.471908 systemd[1]: cri-containerd-72e4d5ecf3a74e43ec997af492ec4052583ea3f19df524af6cc442934f12b6ab.scope: Deactivated successfully. Jan 23 23:57:02.513603 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72e4d5ecf3a74e43ec997af492ec4052583ea3f19df524af6cc442934f12b6ab-rootfs.mount: Deactivated successfully. Jan 23 23:57:02.519293 kubelet[3407]: I0123 23:57:02.519246 3407 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 23:57:02.604551 systemd[1]: Created slice kubepods-burstable-podd79f639b_89ce_4a3e_898f_c563a6cc1a21.slice - libcontainer container kubepods-burstable-podd79f639b_89ce_4a3e_898f_c563a6cc1a21.slice. Jan 23 23:57:02.660168 systemd[1]: Created slice kubepods-besteffort-pod471e63ba_4009_4390_becb_d3cf35fc95c6.slice - libcontainer container kubepods-besteffort-pod471e63ba_4009_4390_becb_d3cf35fc95c6.slice. 
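The transient containerd error above, "no network config found in /etc/cni/net.d: cni plugin not initialized", is the CRI plugin reacting to a file-watch event: install-cni first wrote /etc/cni/net.d/calico-kubeconfig, which is a kubeconfig rather than a network config, so the reload it triggered still found nothing loadable until the Calico conflist landed moments later (the node then flips to ready). A rough sketch of that directory scan follows, assuming the usual convention that only .conf/.conflist files count as network configs; the filtering and output here are illustrative, not containerd's exact implementation.

```go
// Sketch of a CNI conf-dir scan like the one behind the
// "no network config found in /etc/cni/net.d" reload error above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/cni/net.d"
	entries, err := os.ReadDir(confDir)
	if err != nil {
		fmt.Fprintln(os.Stderr, "read conf dir:", err)
		os.Exit(1)
	}
	found := 0
	for _, e := range entries {
		ext := filepath.Ext(e.Name())
		if ext != ".conf" && ext != ".conflist" {
			continue // e.g. calico-kubeconfig is skipped at this step
		}
		raw, err := os.ReadFile(filepath.Join(confDir, e.Name()))
		if err != nil {
			continue
		}
		var cfg struct {
			Name    string `json:"name"`
			Type    string `json:"type"` // set for single-plugin .conf files
			Plugins []struct {
				Type string `json:"type"`
			} `json:"plugins"` // set for .conflist files
		}
		if json.Unmarshal(raw, &cfg) == nil {
			found++
			fmt.Printf("usable network config: %s (name=%q)\n", e.Name(), cfg.Name)
		}
	}
	if found == 0 {
		fmt.Println("no network config found: CNI stays uninitialized")
	}
}
```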
Jan 23 23:57:02.684710 kubelet[3407]: I0123 23:57:02.684650 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/50347932-36ba-4e98-91df-99f4d941be59-whisker-backend-key-pair\") pod \"whisker-757987fb54-4nxp8\" (UID: \"50347932-36ba-4e98-91df-99f4d941be59\") " pod="calico-system/whisker-757987fb54-4nxp8" Jan 23 23:57:02.685035 kubelet[3407]: I0123 23:57:02.685005 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/471e63ba-4009-4390-becb-d3cf35fc95c6-tigera-ca-bundle\") pod \"calico-kube-controllers-5f867bfb44-djs5n\" (UID: \"471e63ba-4009-4390-becb-d3cf35fc95c6\") " pod="calico-system/calico-kube-controllers-5f867bfb44-djs5n" Jan 23 23:57:02.685200 kubelet[3407]: I0123 23:57:02.685173 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzbwp\" (UniqueName: \"kubernetes.io/projected/471e63ba-4009-4390-becb-d3cf35fc95c6-kube-api-access-hzbwp\") pod \"calico-kube-controllers-5f867bfb44-djs5n\" (UID: \"471e63ba-4009-4390-becb-d3cf35fc95c6\") " pod="calico-system/calico-kube-controllers-5f867bfb44-djs5n" Jan 23 23:57:02.685347 kubelet[3407]: I0123 23:57:02.685323 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6q6pw\" (UniqueName: \"kubernetes.io/projected/532cc4d2-2f64-4521-88b0-26ef20fbd1cc-kube-api-access-6q6pw\") pod \"calico-apiserver-7fdd48bcf6-xgcqc\" (UID: \"532cc4d2-2f64-4521-88b0-26ef20fbd1cc\") " pod="calico-apiserver/calico-apiserver-7fdd48bcf6-xgcqc" Jan 23 23:57:02.685532 kubelet[3407]: I0123 23:57:02.685505 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6pb9\" (UniqueName: \"kubernetes.io/projected/761e0c97-a113-4485-8707-6df97f1eaf68-kube-api-access-l6pb9\") pod \"coredns-668d6bf9bc-d4dkx\" (UID: \"761e0c97-a113-4485-8707-6df97f1eaf68\") " pod="kube-system/coredns-668d6bf9bc-d4dkx" Jan 23 23:57:02.685669 kubelet[3407]: I0123 23:57:02.685645 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/046ae13d-0e4a-437d-9371-4ba65edfa713-config\") pod \"goldmane-666569f655-mmfhn\" (UID: \"046ae13d-0e4a-437d-9371-4ba65edfa713\") " pod="calico-system/goldmane-666569f655-mmfhn" Jan 23 23:57:02.685802 kubelet[3407]: I0123 23:57:02.685779 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d79f639b-89ce-4a3e-898f-c563a6cc1a21-config-volume\") pod \"coredns-668d6bf9bc-xljwn\" (UID: \"d79f639b-89ce-4a3e-898f-c563a6cc1a21\") " pod="kube-system/coredns-668d6bf9bc-xljwn" Jan 23 23:57:02.686906 kubelet[3407]: I0123 23:57:02.685913 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50347932-36ba-4e98-91df-99f4d941be59-whisker-ca-bundle\") pod \"whisker-757987fb54-4nxp8\" (UID: \"50347932-36ba-4e98-91df-99f4d941be59\") " pod="calico-system/whisker-757987fb54-4nxp8" Jan 23 23:57:02.686906 kubelet[3407]: I0123 23:57:02.685979 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clm5g\" (UniqueName: 
\"kubernetes.io/projected/046ae13d-0e4a-437d-9371-4ba65edfa713-kube-api-access-clm5g\") pod \"goldmane-666569f655-mmfhn\" (UID: \"046ae13d-0e4a-437d-9371-4ba65edfa713\") " pod="calico-system/goldmane-666569f655-mmfhn" Jan 23 23:57:02.686906 kubelet[3407]: I0123 23:57:02.686020 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv46h\" (UniqueName: \"kubernetes.io/projected/d79f639b-89ce-4a3e-898f-c563a6cc1a21-kube-api-access-xv46h\") pod \"coredns-668d6bf9bc-xljwn\" (UID: \"d79f639b-89ce-4a3e-898f-c563a6cc1a21\") " pod="kube-system/coredns-668d6bf9bc-xljwn" Jan 23 23:57:02.686906 kubelet[3407]: I0123 23:57:02.686063 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/532cc4d2-2f64-4521-88b0-26ef20fbd1cc-calico-apiserver-certs\") pod \"calico-apiserver-7fdd48bcf6-xgcqc\" (UID: \"532cc4d2-2f64-4521-88b0-26ef20fbd1cc\") " pod="calico-apiserver/calico-apiserver-7fdd48bcf6-xgcqc" Jan 23 23:57:02.686906 kubelet[3407]: I0123 23:57:02.686103 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/046ae13d-0e4a-437d-9371-4ba65edfa713-goldmane-ca-bundle\") pod \"goldmane-666569f655-mmfhn\" (UID: \"046ae13d-0e4a-437d-9371-4ba65edfa713\") " pod="calico-system/goldmane-666569f655-mmfhn" Jan 23 23:57:02.687272 kubelet[3407]: I0123 23:57:02.686142 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/761e0c97-a113-4485-8707-6df97f1eaf68-config-volume\") pod \"coredns-668d6bf9bc-d4dkx\" (UID: \"761e0c97-a113-4485-8707-6df97f1eaf68\") " pod="kube-system/coredns-668d6bf9bc-d4dkx" Jan 23 23:57:02.687272 kubelet[3407]: I0123 23:57:02.686230 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b545h\" (UniqueName: \"kubernetes.io/projected/17163695-eef5-4bf6-be5b-0d305316c85b-kube-api-access-b545h\") pod \"calico-apiserver-7fdd48bcf6-n88dg\" (UID: \"17163695-eef5-4bf6-be5b-0d305316c85b\") " pod="calico-apiserver/calico-apiserver-7fdd48bcf6-n88dg" Jan 23 23:57:02.687272 kubelet[3407]: I0123 23:57:02.686275 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/046ae13d-0e4a-437d-9371-4ba65edfa713-goldmane-key-pair\") pod \"goldmane-666569f655-mmfhn\" (UID: \"046ae13d-0e4a-437d-9371-4ba65edfa713\") " pod="calico-system/goldmane-666569f655-mmfhn" Jan 23 23:57:02.687272 kubelet[3407]: I0123 23:57:02.686315 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/17163695-eef5-4bf6-be5b-0d305316c85b-calico-apiserver-certs\") pod \"calico-apiserver-7fdd48bcf6-n88dg\" (UID: \"17163695-eef5-4bf6-be5b-0d305316c85b\") " pod="calico-apiserver/calico-apiserver-7fdd48bcf6-n88dg" Jan 23 23:57:02.687272 kubelet[3407]: I0123 23:57:02.686352 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9plzf\" (UniqueName: \"kubernetes.io/projected/50347932-36ba-4e98-91df-99f4d941be59-kube-api-access-9plzf\") pod \"whisker-757987fb54-4nxp8\" (UID: \"50347932-36ba-4e98-91df-99f4d941be59\") " 
pod="calico-system/whisker-757987fb54-4nxp8" Jan 23 23:57:02.703272 systemd[1]: Created slice kubepods-burstable-pod761e0c97_a113_4485_8707_6df97f1eaf68.slice - libcontainer container kubepods-burstable-pod761e0c97_a113_4485_8707_6df97f1eaf68.slice. Jan 23 23:57:02.722800 systemd[1]: Created slice kubepods-besteffort-pod50347932_36ba_4e98_91df_99f4d941be59.slice - libcontainer container kubepods-besteffort-pod50347932_36ba_4e98_91df_99f4d941be59.slice. Jan 23 23:57:02.745398 systemd[1]: Created slice kubepods-besteffort-pod532cc4d2_2f64_4521_88b0_26ef20fbd1cc.slice - libcontainer container kubepods-besteffort-pod532cc4d2_2f64_4521_88b0_26ef20fbd1cc.slice. Jan 23 23:57:02.765742 systemd[1]: Created slice kubepods-besteffort-pod17163695_eef5_4bf6_be5b_0d305316c85b.slice - libcontainer container kubepods-besteffort-pod17163695_eef5_4bf6_be5b_0d305316c85b.slice. Jan 23 23:57:02.784623 systemd[1]: Created slice kubepods-besteffort-pod046ae13d_0e4a_437d_9371_4ba65edfa713.slice - libcontainer container kubepods-besteffort-pod046ae13d_0e4a_437d_9371_4ba65edfa713.slice. Jan 23 23:57:02.912669 containerd[2029]: time="2026-01-23T23:57:02.912418573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xljwn,Uid:d79f639b-89ce-4a3e-898f-c563a6cc1a21,Namespace:kube-system,Attempt:0,}" Jan 23 23:57:02.932002 containerd[2029]: time="2026-01-23T23:57:02.930421585Z" level=info msg="shim disconnected" id=72e4d5ecf3a74e43ec997af492ec4052583ea3f19df524af6cc442934f12b6ab namespace=k8s.io Jan 23 23:57:02.932002 containerd[2029]: time="2026-01-23T23:57:02.931772689Z" level=warning msg="cleaning up after shim disconnected" id=72e4d5ecf3a74e43ec997af492ec4052583ea3f19df524af6cc442934f12b6ab namespace=k8s.io Jan 23 23:57:02.932002 containerd[2029]: time="2026-01-23T23:57:02.931991269Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:02.977675 containerd[2029]: time="2026-01-23T23:57:02.976679257Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f867bfb44-djs5n,Uid:471e63ba-4009-4390-becb-d3cf35fc95c6,Namespace:calico-system,Attempt:0,}" Jan 23 23:57:02.993246 containerd[2029]: time="2026-01-23T23:57:02.992557021Z" level=warning msg="cleanup warnings time=\"2026-01-23T23:57:02Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 23 23:57:03.015562 containerd[2029]: time="2026-01-23T23:57:03.015358101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d4dkx,Uid:761e0c97-a113-4485-8707-6df97f1eaf68,Namespace:kube-system,Attempt:0,}" Jan 23 23:57:03.041650 containerd[2029]: time="2026-01-23T23:57:03.041236293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-757987fb54-4nxp8,Uid:50347932-36ba-4e98-91df-99f4d941be59,Namespace:calico-system,Attempt:0,}" Jan 23 23:57:03.063127 containerd[2029]: time="2026-01-23T23:57:03.063045369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fdd48bcf6-xgcqc,Uid:532cc4d2-2f64-4521-88b0-26ef20fbd1cc,Namespace:calico-apiserver,Attempt:0,}" Jan 23 23:57:03.079000 containerd[2029]: time="2026-01-23T23:57:03.078941997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fdd48bcf6-n88dg,Uid:17163695-eef5-4bf6-be5b-0d305316c85b,Namespace:calico-apiserver,Attempt:0,}" Jan 23 23:57:03.099076 containerd[2029]: time="2026-01-23T23:57:03.098817753Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-mmfhn,Uid:046ae13d-0e4a-437d-9371-4ba65edfa713,Namespace:calico-system,Attempt:0,}" Jan 23 23:57:03.335502 containerd[2029]: time="2026-01-23T23:57:03.335415959Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 23:57:03.377811 containerd[2029]: time="2026-01-23T23:57:03.377720315Z" level=error msg="Failed to destroy network for sandbox \"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.381364 containerd[2029]: time="2026-01-23T23:57:03.381296879Z" level=error msg="encountered an error cleaning up failed sandbox \"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.383121 containerd[2029]: time="2026-01-23T23:57:03.383055155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f867bfb44-djs5n,Uid:471e63ba-4009-4390-becb-d3cf35fc95c6,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.388360 kubelet[3407]: E0123 23:57:03.384966 3407 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.390163 kubelet[3407]: E0123 23:57:03.388407 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f867bfb44-djs5n" Jan 23 23:57:03.390163 kubelet[3407]: E0123 23:57:03.388475 3407 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f867bfb44-djs5n" Jan 23 23:57:03.390163 kubelet[3407]: E0123 23:57:03.388556 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f867bfb44-djs5n_calico-system(471e63ba-4009-4390-becb-d3cf35fc95c6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f867bfb44-djs5n_calico-system(471e63ba-4009-4390-becb-d3cf35fc95c6)\\\": rpc 
error: code = Unknown desc = failed to setup network for sandbox \\\"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f867bfb44-djs5n" podUID="471e63ba-4009-4390-becb-d3cf35fc95c6" Jan 23 23:57:03.468935 containerd[2029]: time="2026-01-23T23:57:03.468861959Z" level=error msg="Failed to destroy network for sandbox \"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.472293 containerd[2029]: time="2026-01-23T23:57:03.472210103Z" level=error msg="encountered an error cleaning up failed sandbox \"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.472491 containerd[2029]: time="2026-01-23T23:57:03.472320611Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xljwn,Uid:d79f639b-89ce-4a3e-898f-c563a6cc1a21,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.473707 kubelet[3407]: E0123 23:57:03.473619 3407 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.473908 kubelet[3407]: E0123 23:57:03.473727 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xljwn" Jan 23 23:57:03.473908 kubelet[3407]: E0123 23:57:03.473762 3407 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xljwn" Jan 23 23:57:03.473908 kubelet[3407]: E0123 23:57:03.473829 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xljwn_kube-system(d79f639b-89ce-4a3e-898f-c563a6cc1a21)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-xljwn_kube-system(d79f639b-89ce-4a3e-898f-c563a6cc1a21)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xljwn" podUID="d79f639b-89ce-4a3e-898f-c563a6cc1a21" Jan 23 23:57:03.489373 containerd[2029]: time="2026-01-23T23:57:03.489310427Z" level=error msg="Failed to destroy network for sandbox \"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.490351 containerd[2029]: time="2026-01-23T23:57:03.490269395Z" level=error msg="encountered an error cleaning up failed sandbox \"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.491128 containerd[2029]: time="2026-01-23T23:57:03.491072603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d4dkx,Uid:761e0c97-a113-4485-8707-6df97f1eaf68,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.492878 kubelet[3407]: E0123 23:57:03.492774 3407 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.493061 kubelet[3407]: E0123 23:57:03.492895 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d4dkx" Jan 23 23:57:03.493061 kubelet[3407]: E0123 23:57:03.492932 3407 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-d4dkx" Jan 23 23:57:03.494508 kubelet[3407]: E0123 23:57:03.493023 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-d4dkx_kube-system(761e0c97-a113-4485-8707-6df97f1eaf68)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-d4dkx_kube-system(761e0c97-a113-4485-8707-6df97f1eaf68)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-d4dkx" podUID="761e0c97-a113-4485-8707-6df97f1eaf68" Jan 23 23:57:03.562595 containerd[2029]: time="2026-01-23T23:57:03.562520400Z" level=error msg="Failed to destroy network for sandbox \"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.566608 containerd[2029]: time="2026-01-23T23:57:03.566238396Z" level=error msg="encountered an error cleaning up failed sandbox \"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.567242 containerd[2029]: time="2026-01-23T23:57:03.566839740Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fdd48bcf6-n88dg,Uid:17163695-eef5-4bf6-be5b-0d305316c85b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.567728 kubelet[3407]: E0123 23:57:03.567648 3407 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.569562 kubelet[3407]: E0123 23:57:03.567740 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-n88dg" Jan 23 23:57:03.569562 kubelet[3407]: E0123 23:57:03.567776 3407 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-n88dg" Jan 23 23:57:03.569562 kubelet[3407]: E0123 23:57:03.567836 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" 
for \"calico-apiserver-7fdd48bcf6-n88dg_calico-apiserver(17163695-eef5-4bf6-be5b-0d305316c85b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fdd48bcf6-n88dg_calico-apiserver(17163695-eef5-4bf6-be5b-0d305316c85b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-n88dg" podUID="17163695-eef5-4bf6-be5b-0d305316c85b" Jan 23 23:57:03.594570 containerd[2029]: time="2026-01-23T23:57:03.588691956Z" level=error msg="Failed to destroy network for sandbox \"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.598603 containerd[2029]: time="2026-01-23T23:57:03.598518180Z" level=error msg="encountered an error cleaning up failed sandbox \"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.600263 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138-shm.mount: Deactivated successfully. Jan 23 23:57:03.600519 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d-shm.mount: Deactivated successfully. 
Jan 23 23:57:03.608793 containerd[2029]: time="2026-01-23T23:57:03.608608200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-757987fb54-4nxp8,Uid:50347932-36ba-4e98-91df-99f4d941be59,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.609100 kubelet[3407]: E0123 23:57:03.608958 3407 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.609100 kubelet[3407]: E0123 23:57:03.609032 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-757987fb54-4nxp8" Jan 23 23:57:03.609100 kubelet[3407]: E0123 23:57:03.609066 3407 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-757987fb54-4nxp8" Jan 23 23:57:03.611830 kubelet[3407]: E0123 23:57:03.609134 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-757987fb54-4nxp8_calico-system(50347932-36ba-4e98-91df-99f4d941be59)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-757987fb54-4nxp8_calico-system(50347932-36ba-4e98-91df-99f4d941be59)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-757987fb54-4nxp8" podUID="50347932-36ba-4e98-91df-99f4d941be59" Jan 23 23:57:03.633920 containerd[2029]: time="2026-01-23T23:57:03.633828804Z" level=error msg="Failed to destroy network for sandbox \"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.634604 containerd[2029]: time="2026-01-23T23:57:03.634539960Z" level=error msg="encountered an error cleaning up failed sandbox \"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.639564 containerd[2029]: time="2026-01-23T23:57:03.634661136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fdd48bcf6-xgcqc,Uid:532cc4d2-2f64-4521-88b0-26ef20fbd1cc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.640371 kubelet[3407]: E0123 23:57:03.635056 3407 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.640371 kubelet[3407]: E0123 23:57:03.635135 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-xgcqc" Jan 23 23:57:03.640371 kubelet[3407]: E0123 23:57:03.635169 3407 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-xgcqc" Jan 23 23:57:03.640620 kubelet[3407]: E0123 23:57:03.635228 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7fdd48bcf6-xgcqc_calico-apiserver(532cc4d2-2f64-4521-88b0-26ef20fbd1cc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7fdd48bcf6-xgcqc_calico-apiserver(532cc4d2-2f64-4521-88b0-26ef20fbd1cc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-xgcqc" podUID="532cc4d2-2f64-4521-88b0-26ef20fbd1cc" Jan 23 23:57:03.642563 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044-shm.mount: Deactivated successfully. 
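The prefix "rpc error: code = Unknown desc = ..." repeated in every kubelet entry here is how gRPC surfaces a plain error returned by the CRI server: errors without an explicit status code map to codes.Unknown on the client side. A small sketch of that mapping, assuming the google.golang.org/grpc module; the sandbox ID is abbreviated:

```go
// Sketch of why kubelet logs "rpc error: code = Unknown desc = ...":
// a plain Go error from a gRPC (CRI) handler reaches the client with
// codes.Unknown when no explicit status code was attached.
package main

import (
	"errors"
	"fmt"

	"google.golang.org/grpc/status"
)

func main() {
	// Roughly what the CRI RunPodSandbox handler returns in the log above.
	err := errors.New(`failed to setup network for sandbox "42a9...": plugin type="calico" failed (add)`)

	// status.Convert yields codes.Unknown for errors without a code,
	// matching the prefix kubelet prints in log.go:32.
	st := status.Convert(err)
	fmt.Printf("rpc error: code = %s desc = %s\n", st.Code(), st.Message())
}
```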
Jan 23 23:57:03.653318 containerd[2029]: time="2026-01-23T23:57:03.653245920Z" level=error msg="Failed to destroy network for sandbox \"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.653908 containerd[2029]: time="2026-01-23T23:57:03.653856648Z" level=error msg="encountered an error cleaning up failed sandbox \"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.654021 containerd[2029]: time="2026-01-23T23:57:03.653946048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mmfhn,Uid:046ae13d-0e4a-437d-9371-4ba65edfa713,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.654621 kubelet[3407]: E0123 23:57:03.654296 3407 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:03.654621 kubelet[3407]: E0123 23:57:03.654376 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mmfhn" Jan 23 23:57:03.654621 kubelet[3407]: E0123 23:57:03.654407 3407 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-mmfhn" Jan 23 23:57:03.655664 kubelet[3407]: E0123 23:57:03.654899 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-mmfhn_calico-system(046ae13d-0e4a-437d-9371-4ba65edfa713)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-mmfhn_calico-system(046ae13d-0e4a-437d-9371-4ba65edfa713)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-666569f655-mmfhn" podUID="046ae13d-0e4a-437d-9371-4ba65edfa713" Jan 23 23:57:03.660006 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193-shm.mount: Deactivated successfully. Jan 23 23:57:04.050856 systemd[1]: Created slice kubepods-besteffort-pod116c2572_ef7b_49fd_a16b_25d6e19f65b8.slice - libcontainer container kubepods-besteffort-pod116c2572_ef7b_49fd_a16b_25d6e19f65b8.slice. Jan 23 23:57:04.057494 containerd[2029]: time="2026-01-23T23:57:04.056998258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wc9cs,Uid:116c2572-ef7b-49fd-a16b-25d6e19f65b8,Namespace:calico-system,Attempt:0,}" Jan 23 23:57:04.172591 containerd[2029]: time="2026-01-23T23:57:04.172523555Z" level=error msg="Failed to destroy network for sandbox \"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:04.173303 containerd[2029]: time="2026-01-23T23:57:04.173257535Z" level=error msg="encountered an error cleaning up failed sandbox \"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:04.173485 containerd[2029]: time="2026-01-23T23:57:04.173427335Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wc9cs,Uid:116c2572-ef7b-49fd-a16b-25d6e19f65b8,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:04.175313 kubelet[3407]: E0123 23:57:04.173915 3407 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:04.175313 kubelet[3407]: E0123 23:57:04.173988 3407 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wc9cs" Jan 23 23:57:04.175313 kubelet[3407]: E0123 23:57:04.174021 3407 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wc9cs" Jan 23 23:57:04.175801 
kubelet[3407]: E0123 23:57:04.174091 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wc9cs_calico-system(116c2572-ef7b-49fd-a16b-25d6e19f65b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wc9cs_calico-system(116c2572-ef7b-49fd-a16b-25d6e19f65b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wc9cs" podUID="116c2572-ef7b-49fd-a16b-25d6e19f65b8" Jan 23 23:57:04.332057 kubelet[3407]: I0123 23:57:04.331951 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" Jan 23 23:57:04.336220 containerd[2029]: time="2026-01-23T23:57:04.334521108Z" level=info msg="StopPodSandbox for \"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\"" Jan 23 23:57:04.336220 containerd[2029]: time="2026-01-23T23:57:04.334822596Z" level=info msg="Ensure that sandbox fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138 in task-service has been cleanup successfully" Jan 23 23:57:04.337307 kubelet[3407]: I0123 23:57:04.337075 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" Jan 23 23:57:04.346563 containerd[2029]: time="2026-01-23T23:57:04.344381088Z" level=info msg="StopPodSandbox for \"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\"" Jan 23 23:57:04.346563 containerd[2029]: time="2026-01-23T23:57:04.344955672Z" level=info msg="Ensure that sandbox fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7 in task-service has been cleanup successfully" Jan 23 23:57:04.358400 kubelet[3407]: I0123 23:57:04.358338 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" Jan 23 23:57:04.384046 containerd[2029]: time="2026-01-23T23:57:04.383963976Z" level=info msg="StopPodSandbox for \"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\"" Jan 23 23:57:04.384396 containerd[2029]: time="2026-01-23T23:57:04.384340812Z" level=info msg="Ensure that sandbox 6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193 in task-service has been cleanup successfully" Jan 23 23:57:04.397372 kubelet[3407]: I0123 23:57:04.397297 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" Jan 23 23:57:04.402160 containerd[2029]: time="2026-01-23T23:57:04.402029964Z" level=info msg="StopPodSandbox for \"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\"" Jan 23 23:57:04.404222 containerd[2029]: time="2026-01-23T23:57:04.403801368Z" level=info msg="Ensure that sandbox c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1 in task-service has been cleanup successfully" Jan 23 23:57:04.408419 kubelet[3407]: I0123 23:57:04.408205 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" Jan 23 23:57:04.414807 containerd[2029]: 
time="2026-01-23T23:57:04.414748656Z" level=info msg="StopPodSandbox for \"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\"" Jan 23 23:57:04.418240 containerd[2029]: time="2026-01-23T23:57:04.418157796Z" level=info msg="Ensure that sandbox 256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044 in task-service has been cleanup successfully" Jan 23 23:57:04.426859 kubelet[3407]: I0123 23:57:04.426264 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" Jan 23 23:57:04.436734 containerd[2029]: time="2026-01-23T23:57:04.436673772Z" level=info msg="StopPodSandbox for \"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\"" Jan 23 23:57:04.437029 containerd[2029]: time="2026-01-23T23:57:04.436983396Z" level=info msg="Ensure that sandbox f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5 in task-service has been cleanup successfully" Jan 23 23:57:04.456710 kubelet[3407]: I0123 23:57:04.456629 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" Jan 23 23:57:04.461594 containerd[2029]: time="2026-01-23T23:57:04.460612608Z" level=info msg="StopPodSandbox for \"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\"" Jan 23 23:57:04.465561 containerd[2029]: time="2026-01-23T23:57:04.465479484Z" level=info msg="Ensure that sandbox 42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115 in task-service has been cleanup successfully" Jan 23 23:57:04.472585 containerd[2029]: time="2026-01-23T23:57:04.472524540Z" level=error msg="StopPodSandbox for \"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\" failed" error="failed to destroy network for sandbox \"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:04.473826 kubelet[3407]: E0123 23:57:04.473379 3407 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" Jan 23 23:57:04.473826 kubelet[3407]: E0123 23:57:04.473497 3407 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138"} Jan 23 23:57:04.473826 kubelet[3407]: E0123 23:57:04.473585 3407 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"17163695-eef5-4bf6-be5b-0d305316c85b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:57:04.473826 kubelet[3407]: E0123 23:57:04.473624 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"KillPodSandbox\" for \"17163695-eef5-4bf6-be5b-0d305316c85b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-n88dg" podUID="17163695-eef5-4bf6-be5b-0d305316c85b" Jan 23 23:57:04.475917 kubelet[3407]: I0123 23:57:04.475289 3407 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" Jan 23 23:57:04.482053 containerd[2029]: time="2026-01-23T23:57:04.481816152Z" level=info msg="StopPodSandbox for \"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\"" Jan 23 23:57:04.487011 containerd[2029]: time="2026-01-23T23:57:04.486813696Z" level=info msg="Ensure that sandbox 234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d in task-service has been cleanup successfully" Jan 23 23:57:04.515109 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1-shm.mount: Deactivated successfully. Jan 23 23:57:04.549534 containerd[2029]: time="2026-01-23T23:57:04.549043333Z" level=error msg="StopPodSandbox for \"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\" failed" error="failed to destroy network for sandbox \"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:04.551319 kubelet[3407]: E0123 23:57:04.551213 3407 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" Jan 23 23:57:04.552049 kubelet[3407]: E0123 23:57:04.551776 3407 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7"} Jan 23 23:57:04.552049 kubelet[3407]: E0123 23:57:04.551916 3407 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d79f639b-89ce-4a3e-898f-c563a6cc1a21\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:57:04.552049 kubelet[3407]: E0123 23:57:04.551966 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d79f639b-89ce-4a3e-898f-c563a6cc1a21\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\\\": plugin type=\\\"calico\\\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xljwn" podUID="d79f639b-89ce-4a3e-898f-c563a6cc1a21" Jan 23 23:57:04.602129 containerd[2029]: time="2026-01-23T23:57:04.601424773Z" level=error msg="StopPodSandbox for \"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\" failed" error="failed to destroy network for sandbox \"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:04.602265 kubelet[3407]: E0123 23:57:04.601783 3407 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" Jan 23 23:57:04.602265 kubelet[3407]: E0123 23:57:04.601855 3407 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115"} Jan 23 23:57:04.602265 kubelet[3407]: E0123 23:57:04.601916 3407 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"471e63ba-4009-4390-becb-d3cf35fc95c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:57:04.602265 kubelet[3407]: E0123 23:57:04.601960 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"471e63ba-4009-4390-becb-d3cf35fc95c6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f867bfb44-djs5n" podUID="471e63ba-4009-4390-becb-d3cf35fc95c6" Jan 23 23:57:04.625866 containerd[2029]: time="2026-01-23T23:57:04.625798825Z" level=error msg="StopPodSandbox for \"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\" failed" error="failed to destroy network for sandbox \"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:04.626654 kubelet[3407]: E0123 23:57:04.626271 3407 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" Jan 23 23:57:04.626654 kubelet[3407]: E0123 23:57:04.626342 3407 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d"} Jan 23 23:57:04.626654 kubelet[3407]: E0123 23:57:04.626400 3407 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"50347932-36ba-4e98-91df-99f4d941be59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:57:04.626654 kubelet[3407]: E0123 23:57:04.626463 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"50347932-36ba-4e98-91df-99f4d941be59\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-757987fb54-4nxp8" podUID="50347932-36ba-4e98-91df-99f4d941be59" Jan 23 23:57:04.659935 containerd[2029]: time="2026-01-23T23:57:04.659402689Z" level=error msg="StopPodSandbox for \"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\" failed" error="failed to destroy network for sandbox \"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:04.660095 kubelet[3407]: E0123 23:57:04.659787 3407 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" Jan 23 23:57:04.660095 kubelet[3407]: E0123 23:57:04.659856 3407 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044"} Jan 23 23:57:04.660095 kubelet[3407]: E0123 23:57:04.659910 3407 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"532cc4d2-2f64-4521-88b0-26ef20fbd1cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:57:04.660095 kubelet[3407]: E0123 23:57:04.659956 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"532cc4d2-2f64-4521-88b0-26ef20fbd1cc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-xgcqc" podUID="532cc4d2-2f64-4521-88b0-26ef20fbd1cc" Jan 23 23:57:04.666403 containerd[2029]: time="2026-01-23T23:57:04.665853805Z" level=error msg="StopPodSandbox for \"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\" failed" error="failed to destroy network for sandbox \"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:04.666889 kubelet[3407]: E0123 23:57:04.666177 3407 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" Jan 23 23:57:04.666889 kubelet[3407]: E0123 23:57:04.666243 3407 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193"} Jan 23 23:57:04.666889 kubelet[3407]: E0123 23:57:04.666303 3407 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"046ae13d-0e4a-437d-9371-4ba65edfa713\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:57:04.666889 kubelet[3407]: E0123 23:57:04.666344 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"046ae13d-0e4a-437d-9371-4ba65edfa713\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-mmfhn" podUID="046ae13d-0e4a-437d-9371-4ba65edfa713" Jan 23 23:57:04.676425 containerd[2029]: time="2026-01-23T23:57:04.675896233Z" level=error msg="StopPodSandbox for \"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\" failed" error="failed to destroy network for sandbox \"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:04.676746 kubelet[3407]: E0123 23:57:04.676430 3407 log.go:32] "StopPodSandbox 
from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" Jan 23 23:57:04.676746 kubelet[3407]: E0123 23:57:04.676591 3407 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1"} Jan 23 23:57:04.676746 kubelet[3407]: E0123 23:57:04.676649 3407 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"116c2572-ef7b-49fd-a16b-25d6e19f65b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:57:04.676746 kubelet[3407]: E0123 23:57:04.676691 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"116c2572-ef7b-49fd-a16b-25d6e19f65b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wc9cs" podUID="116c2572-ef7b-49fd-a16b-25d6e19f65b8" Jan 23 23:57:04.679034 containerd[2029]: time="2026-01-23T23:57:04.678953941Z" level=error msg="StopPodSandbox for \"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\" failed" error="failed to destroy network for sandbox \"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:04.679499 kubelet[3407]: E0123 23:57:04.679398 3407 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" Jan 23 23:57:04.679613 kubelet[3407]: E0123 23:57:04.679530 3407 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5"} Jan 23 23:57:04.679613 kubelet[3407]: E0123 23:57:04.679586 3407 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"761e0c97-a113-4485-8707-6df97f1eaf68\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:57:04.680099 kubelet[3407]: E0123 23:57:04.679624 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"761e0c97-a113-4485-8707-6df97f1eaf68\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-d4dkx" podUID="761e0c97-a113-4485-8707-6df97f1eaf68" Jan 23 23:57:10.225033 systemd[1]: Started sshd@7-172.31.20.253:22-4.153.228.146:43996.service - OpenSSH per-connection server daemon (4.153.228.146:43996). Jan 23 23:57:10.234866 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4164111272.mount: Deactivated successfully. Jan 23 23:57:10.291888 containerd[2029]: time="2026-01-23T23:57:10.290510273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:10.293876 containerd[2029]: time="2026-01-23T23:57:10.293828957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 23 23:57:10.295845 containerd[2029]: time="2026-01-23T23:57:10.295218893Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:10.321558 containerd[2029]: time="2026-01-23T23:57:10.321473573Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:57:10.323037 containerd[2029]: time="2026-01-23T23:57:10.322987529Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.987172006s" Jan 23 23:57:10.323190 containerd[2029]: time="2026-01-23T23:57:10.323160701Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 23 23:57:10.356167 containerd[2029]: time="2026-01-23T23:57:10.356103774Z" level=info msg="CreateContainer within sandbox \"1948e0d6612a6635f1c091b85e0e4c6d89f72a558d575271f9112ad7821571bc\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 23:57:10.378285 containerd[2029]: time="2026-01-23T23:57:10.378109146Z" level=info msg="CreateContainer within sandbox \"1948e0d6612a6635f1c091b85e0e4c6d89f72a558d575271f9112ad7821571bc\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"455cc3d750323a875be69491be1ad4454fc483af4dcadefcad0d1243276ce8ad\"" Jan 23 23:57:10.379560 containerd[2029]: time="2026-01-23T23:57:10.379507230Z" level=info msg="StartContainer for \"455cc3d750323a875be69491be1ad4454fc483af4dcadefcad0d1243276ce8ad\"" Jan 23 23:57:10.463792 systemd[1]: Started 
cri-containerd-455cc3d750323a875be69491be1ad4454fc483af4dcadefcad0d1243276ce8ad.scope - libcontainer container 455cc3d750323a875be69491be1ad4454fc483af4dcadefcad0d1243276ce8ad. Jan 23 23:57:10.572063 containerd[2029]: time="2026-01-23T23:57:10.571988923Z" level=info msg="StartContainer for \"455cc3d750323a875be69491be1ad4454fc483af4dcadefcad0d1243276ce8ad\" returns successfully" Jan 23 23:57:10.805777 sshd[4543]: Accepted publickey for core from 4.153.228.146 port 43996 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:10.828826 sshd[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:10.857241 systemd-logind[2003]: New session 8 of user core. Jan 23 23:57:10.871720 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 23:57:10.871788 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 23 23:57:10.868911 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 23:57:11.383530 containerd[2029]: time="2026-01-23T23:57:11.382181803Z" level=info msg="StopPodSandbox for \"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\"" Jan 23 23:57:11.448832 sshd[4543]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:11.460341 systemd[1]: sshd@7-172.31.20.253:22-4.153.228.146:43996.service: Deactivated successfully. Jan 23 23:57:11.470161 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 23:57:11.478170 systemd-logind[2003]: Session 8 logged out. Waiting for processes to exit. Jan 23 23:57:11.484686 systemd-logind[2003]: Removed session 8. Jan 23 23:57:11.582508 kubelet[3407]: I0123 23:57:11.581105 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-25822" podStartSLOduration=2.6192385849999997 podStartE2EDuration="18.581066744s" podCreationTimestamp="2026-01-23 23:56:53 +0000 UTC" firstStartedPulling="2026-01-23 23:56:54.362753894 +0000 UTC m=+36.585393111" lastFinishedPulling="2026-01-23 23:57:10.324582053 +0000 UTC m=+52.547221270" observedRunningTime="2026-01-23 23:57:11.578036744 +0000 UTC m=+53.800675997" watchObservedRunningTime="2026-01-23 23:57:11.581066744 +0000 UTC m=+53.803706333" Jan 23 23:57:11.814631 containerd[2029]: 2026-01-23 23:57:11.680 [INFO][4615] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" Jan 23 23:57:11.814631 containerd[2029]: 2026-01-23 23:57:11.681 [INFO][4615] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" iface="eth0" netns="/var/run/netns/cni-c2e8bba1-133b-588b-7db1-578151d56170" Jan 23 23:57:11.814631 containerd[2029]: 2026-01-23 23:57:11.682 [INFO][4615] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" iface="eth0" netns="/var/run/netns/cni-c2e8bba1-133b-588b-7db1-578151d56170" Jan 23 23:57:11.814631 containerd[2029]: 2026-01-23 23:57:11.683 [INFO][4615] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" iface="eth0" netns="/var/run/netns/cni-c2e8bba1-133b-588b-7db1-578151d56170" Jan 23 23:57:11.814631 containerd[2029]: 2026-01-23 23:57:11.683 [INFO][4615] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" Jan 23 23:57:11.814631 containerd[2029]: 2026-01-23 23:57:11.683 [INFO][4615] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" Jan 23 23:57:11.814631 containerd[2029]: 2026-01-23 23:57:11.782 [INFO][4645] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" HandleID="k8s-pod-network.234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" Workload="ip--172--31--20--253-k8s-whisker--757987fb54--4nxp8-eth0" Jan 23 23:57:11.814631 containerd[2029]: 2026-01-23 23:57:11.783 [INFO][4645] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:11.814631 containerd[2029]: 2026-01-23 23:57:11.783 [INFO][4645] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:11.814631 containerd[2029]: 2026-01-23 23:57:11.797 [WARNING][4645] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" HandleID="k8s-pod-network.234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" Workload="ip--172--31--20--253-k8s-whisker--757987fb54--4nxp8-eth0" Jan 23 23:57:11.814631 containerd[2029]: 2026-01-23 23:57:11.798 [INFO][4645] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" HandleID="k8s-pod-network.234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" Workload="ip--172--31--20--253-k8s-whisker--757987fb54--4nxp8-eth0" Jan 23 23:57:11.814631 containerd[2029]: 2026-01-23 23:57:11.802 [INFO][4645] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:11.814631 containerd[2029]: 2026-01-23 23:57:11.810 [INFO][4615] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" Jan 23 23:57:11.817992 containerd[2029]: time="2026-01-23T23:57:11.817590093Z" level=info msg="TearDown network for sandbox \"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\" successfully" Jan 23 23:57:11.817992 containerd[2029]: time="2026-01-23T23:57:11.817646937Z" level=info msg="StopPodSandbox for \"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\" returns successfully" Jan 23 23:57:11.827974 systemd[1]: run-netns-cni\x2dc2e8bba1\x2d133b\x2d588b\x2d7db1\x2d578151d56170.mount: Deactivated successfully. 
Jan 23 23:57:11.869613 kubelet[3407]: I0123 23:57:11.868192 3407 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50347932-36ba-4e98-91df-99f4d941be59-whisker-ca-bundle\") pod \"50347932-36ba-4e98-91df-99f4d941be59\" (UID: \"50347932-36ba-4e98-91df-99f4d941be59\") " Jan 23 23:57:11.869613 kubelet[3407]: I0123 23:57:11.868319 3407 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9plzf\" (UniqueName: \"kubernetes.io/projected/50347932-36ba-4e98-91df-99f4d941be59-kube-api-access-9plzf\") pod \"50347932-36ba-4e98-91df-99f4d941be59\" (UID: \"50347932-36ba-4e98-91df-99f4d941be59\") " Jan 23 23:57:11.869613 kubelet[3407]: I0123 23:57:11.868390 3407 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/50347932-36ba-4e98-91df-99f4d941be59-whisker-backend-key-pair\") pod \"50347932-36ba-4e98-91df-99f4d941be59\" (UID: \"50347932-36ba-4e98-91df-99f4d941be59\") " Jan 23 23:57:11.869613 kubelet[3407]: I0123 23:57:11.868974 3407 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50347932-36ba-4e98-91df-99f4d941be59-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "50347932-36ba-4e98-91df-99f4d941be59" (UID: "50347932-36ba-4e98-91df-99f4d941be59"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 23:57:11.889135 kubelet[3407]: I0123 23:57:11.889028 3407 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50347932-36ba-4e98-91df-99f4d941be59-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "50347932-36ba-4e98-91df-99f4d941be59" (UID: "50347932-36ba-4e98-91df-99f4d941be59"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 23:57:11.891534 kubelet[3407]: I0123 23:57:11.889635 3407 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50347932-36ba-4e98-91df-99f4d941be59-kube-api-access-9plzf" (OuterVolumeSpecName: "kube-api-access-9plzf") pod "50347932-36ba-4e98-91df-99f4d941be59" (UID: "50347932-36ba-4e98-91df-99f4d941be59"). InnerVolumeSpecName "kube-api-access-9plzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 23:57:11.894139 systemd[1]: var-lib-kubelet-pods-50347932\x2d36ba\x2d4e98\x2d91df\x2d99f4d941be59-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 23:57:11.906907 systemd[1]: var-lib-kubelet-pods-50347932\x2d36ba\x2d4e98\x2d91df\x2d99f4d941be59-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9plzf.mount: Deactivated successfully. 
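The pod_startup_latency_tracker record a few lines above (for calico-node-25822) can be reproduced from its own timestamps: the E2E duration is observedRunningTime minus podCreationTimestamp, and the SLO duration additionally excludes the image-pull window (lastFinishedPulling minus firstStartedPulling). A minimal check of that arithmetic, using only the values printed in the record:

```python
# Reproduce the kubelet pod_startup_latency_tracker figures logged above.
# Timestamps are copied from the record; nanoseconds are truncated to
# microseconds because strptime's %f accepts at most six digits.
from datetime import datetime, timezone

def ts(s: str) -> datetime:
    return datetime.strptime(s[:26], "%Y-%m-%d %H:%M:%S.%f").replace(tzinfo=timezone.utc)

created    = datetime(2026, 1, 23, 23, 56, 53, tzinfo=timezone.utc)
first_pull = ts("2026-01-23 23:56:54.362753894")
last_pull  = ts("2026-01-23 23:57:10.324582053")
running    = ts("2026-01-23 23:57:11.581066744")

e2e = (running - created).total_seconds()
slo = e2e - (last_pull - first_pull).total_seconds()
print(f"podStartE2EDuration ~ {e2e:.6f}s")  # log: 18.581066744s
print(f"podStartSLOduration ~ {slo:.6f}s")  # log: 2.6192385849999997s
```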
Jan 23 23:57:11.969912 kubelet[3407]: I0123 23:57:11.969786 3407 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50347932-36ba-4e98-91df-99f4d941be59-whisker-ca-bundle\") on node \"ip-172-31-20-253\" DevicePath \"\"" Jan 23 23:57:11.969912 kubelet[3407]: I0123 23:57:11.969841 3407 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9plzf\" (UniqueName: \"kubernetes.io/projected/50347932-36ba-4e98-91df-99f4d941be59-kube-api-access-9plzf\") on node \"ip-172-31-20-253\" DevicePath \"\"" Jan 23 23:57:11.969912 kubelet[3407]: I0123 23:57:11.969865 3407 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/50347932-36ba-4e98-91df-99f4d941be59-whisker-backend-key-pair\") on node \"ip-172-31-20-253\" DevicePath \"\"" Jan 23 23:57:12.062915 systemd[1]: Removed slice kubepods-besteffort-pod50347932_36ba_4e98_91df_99f4d941be59.slice - libcontainer container kubepods-besteffort-pod50347932_36ba_4e98_91df_99f4d941be59.slice. Jan 23 23:57:12.655646 systemd[1]: Created slice kubepods-besteffort-pod93ccd330_a859_4470_8f8e_396ff6ffb624.slice - libcontainer container kubepods-besteffort-pod93ccd330_a859_4470_8f8e_396ff6ffb624.slice. Jan 23 23:57:12.675340 kubelet[3407]: I0123 23:57:12.675262 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/93ccd330-a859-4470-8f8e-396ff6ffb624-whisker-ca-bundle\") pod \"whisker-7844db9b64-8ffcr\" (UID: \"93ccd330-a859-4470-8f8e-396ff6ffb624\") " pod="calico-system/whisker-7844db9b64-8ffcr" Jan 23 23:57:12.675340 kubelet[3407]: I0123 23:57:12.675348 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp99b\" (UniqueName: \"kubernetes.io/projected/93ccd330-a859-4470-8f8e-396ff6ffb624-kube-api-access-jp99b\") pod \"whisker-7844db9b64-8ffcr\" (UID: \"93ccd330-a859-4470-8f8e-396ff6ffb624\") " pod="calico-system/whisker-7844db9b64-8ffcr" Jan 23 23:57:12.676017 kubelet[3407]: I0123 23:57:12.675399 3407 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/93ccd330-a859-4470-8f8e-396ff6ffb624-whisker-backend-key-pair\") pod \"whisker-7844db9b64-8ffcr\" (UID: \"93ccd330-a859-4470-8f8e-396ff6ffb624\") " pod="calico-system/whisker-7844db9b64-8ffcr" Jan 23 23:57:12.961931 containerd[2029]: time="2026-01-23T23:57:12.961743622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7844db9b64-8ffcr,Uid:93ccd330-a859-4470-8f8e-396ff6ffb624,Namespace:calico-system,Attempt:0,}" Jan 23 23:57:13.229260 (udev-worker)[4593]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 23:57:13.243062 systemd-networkd[1942]: cali74a60d8b08b: Link UP Jan 23 23:57:13.252492 systemd-networkd[1942]: cali74a60d8b08b: Gained carrier Jan 23 23:57:13.297635 containerd[2029]: 2026-01-23 23:57:13.036 [INFO][4697] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 23:57:13.297635 containerd[2029]: 2026-01-23 23:57:13.058 [INFO][4697] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--253-k8s-whisker--7844db9b64--8ffcr-eth0 whisker-7844db9b64- calico-system 93ccd330-a859-4470-8f8e-396ff6ffb624 979 0 2026-01-23 23:57:12 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7844db9b64 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-20-253 whisker-7844db9b64-8ffcr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali74a60d8b08b [] [] }} ContainerID="539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848" Namespace="calico-system" Pod="whisker-7844db9b64-8ffcr" WorkloadEndpoint="ip--172--31--20--253-k8s-whisker--7844db9b64--8ffcr-" Jan 23 23:57:13.297635 containerd[2029]: 2026-01-23 23:57:13.058 [INFO][4697] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848" Namespace="calico-system" Pod="whisker-7844db9b64-8ffcr" WorkloadEndpoint="ip--172--31--20--253-k8s-whisker--7844db9b64--8ffcr-eth0" Jan 23 23:57:13.297635 containerd[2029]: 2026-01-23 23:57:13.129 [INFO][4708] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848" HandleID="k8s-pod-network.539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848" Workload="ip--172--31--20--253-k8s-whisker--7844db9b64--8ffcr-eth0" Jan 23 23:57:13.297635 containerd[2029]: 2026-01-23 23:57:13.130 [INFO][4708] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848" HandleID="k8s-pod-network.539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848" Workload="ip--172--31--20--253-k8s-whisker--7844db9b64--8ffcr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002aa340), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-253", "pod":"whisker-7844db9b64-8ffcr", "timestamp":"2026-01-23 23:57:13.129160807 +0000 UTC"}, Hostname:"ip-172-31-20-253", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:57:13.297635 containerd[2029]: 2026-01-23 23:57:13.130 [INFO][4708] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:13.297635 containerd[2029]: 2026-01-23 23:57:13.130 [INFO][4708] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:57:13.297635 containerd[2029]: 2026-01-23 23:57:13.131 [INFO][4708] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-253' Jan 23 23:57:13.297635 containerd[2029]: 2026-01-23 23:57:13.149 [INFO][4708] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848" host="ip-172-31-20-253" Jan 23 23:57:13.297635 containerd[2029]: 2026-01-23 23:57:13.161 [INFO][4708] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-253" Jan 23 23:57:13.297635 containerd[2029]: 2026-01-23 23:57:13.169 [INFO][4708] ipam/ipam.go 511: Trying affinity for 192.168.18.128/26 host="ip-172-31-20-253" Jan 23 23:57:13.297635 containerd[2029]: 2026-01-23 23:57:13.173 [INFO][4708] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.128/26 host="ip-172-31-20-253" Jan 23 23:57:13.297635 containerd[2029]: 2026-01-23 23:57:13.176 [INFO][4708] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.128/26 host="ip-172-31-20-253" Jan 23 23:57:13.297635 containerd[2029]: 2026-01-23 23:57:13.177 [INFO][4708] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.18.128/26 handle="k8s-pod-network.539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848" host="ip-172-31-20-253" Jan 23 23:57:13.297635 containerd[2029]: 2026-01-23 23:57:13.180 [INFO][4708] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848 Jan 23 23:57:13.297635 containerd[2029]: 2026-01-23 23:57:13.200 [INFO][4708] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.18.128/26 handle="k8s-pod-network.539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848" host="ip-172-31-20-253" Jan 23 23:57:13.297635 containerd[2029]: 2026-01-23 23:57:13.210 [INFO][4708] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.18.129/26] block=192.168.18.128/26 handle="k8s-pod-network.539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848" host="ip-172-31-20-253" Jan 23 23:57:13.297635 containerd[2029]: 2026-01-23 23:57:13.210 [INFO][4708] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.129/26] handle="k8s-pod-network.539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848" host="ip-172-31-20-253" Jan 23 23:57:13.297635 containerd[2029]: 2026-01-23 23:57:13.210 [INFO][4708] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
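The IPAM trace just above claims 192.168.18.129 from the host-affine block 192.168.18.128/26. The block arithmetic is plain CIDR math and can be sanity-checked with the standard library; Calico's allocator layers affinities, handles and the host-wide lock seen in the log on top of this.

```python
# Check the block membership and ordering behind the IPAM trace above.
import ipaddress

block  = ipaddress.ip_network("192.168.18.128/26")
pod_ip = ipaddress.ip_address("192.168.18.129")

print(pod_ip in block)          # True: the claimed IP falls inside the block
print(block.num_addresses)      # 64 addresses per /26 block
print(list(block.hosts())[:2])  # 192.168.18.129, 192.168.18.130
# Consistent with the log: the calico-apiserver sandbox created later on
# this node is assigned 192.168.18.130, the next host in the same block.
```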
Jan 23 23:57:13.297635 containerd[2029]: 2026-01-23 23:57:13.210 [INFO][4708] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.18.129/26] IPv6=[] ContainerID="539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848" HandleID="k8s-pod-network.539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848" Workload="ip--172--31--20--253-k8s-whisker--7844db9b64--8ffcr-eth0" Jan 23 23:57:13.300337 containerd[2029]: 2026-01-23 23:57:13.215 [INFO][4697] cni-plugin/k8s.go 418: Populated endpoint ContainerID="539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848" Namespace="calico-system" Pod="whisker-7844db9b64-8ffcr" WorkloadEndpoint="ip--172--31--20--253-k8s-whisker--7844db9b64--8ffcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-whisker--7844db9b64--8ffcr-eth0", GenerateName:"whisker-7844db9b64-", Namespace:"calico-system", SelfLink:"", UID:"93ccd330-a859-4470-8f8e-396ff6ffb624", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7844db9b64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"", Pod:"whisker-7844db9b64-8ffcr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.18.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali74a60d8b08b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:13.300337 containerd[2029]: 2026-01-23 23:57:13.215 [INFO][4697] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.129/32] ContainerID="539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848" Namespace="calico-system" Pod="whisker-7844db9b64-8ffcr" WorkloadEndpoint="ip--172--31--20--253-k8s-whisker--7844db9b64--8ffcr-eth0" Jan 23 23:57:13.300337 containerd[2029]: 2026-01-23 23:57:13.215 [INFO][4697] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74a60d8b08b ContainerID="539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848" Namespace="calico-system" Pod="whisker-7844db9b64-8ffcr" WorkloadEndpoint="ip--172--31--20--253-k8s-whisker--7844db9b64--8ffcr-eth0" Jan 23 23:57:13.300337 containerd[2029]: 2026-01-23 23:57:13.250 [INFO][4697] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848" Namespace="calico-system" Pod="whisker-7844db9b64-8ffcr" WorkloadEndpoint="ip--172--31--20--253-k8s-whisker--7844db9b64--8ffcr-eth0" Jan 23 23:57:13.300337 containerd[2029]: 2026-01-23 23:57:13.257 [INFO][4697] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848" Namespace="calico-system" Pod="whisker-7844db9b64-8ffcr" 
WorkloadEndpoint="ip--172--31--20--253-k8s-whisker--7844db9b64--8ffcr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-whisker--7844db9b64--8ffcr-eth0", GenerateName:"whisker-7844db9b64-", Namespace:"calico-system", SelfLink:"", UID:"93ccd330-a859-4470-8f8e-396ff6ffb624", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7844db9b64", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848", Pod:"whisker-7844db9b64-8ffcr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.18.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali74a60d8b08b", MAC:"c6:09:5e:68:89:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:13.300337 containerd[2029]: 2026-01-23 23:57:13.289 [INFO][4697] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848" Namespace="calico-system" Pod="whisker-7844db9b64-8ffcr" WorkloadEndpoint="ip--172--31--20--253-k8s-whisker--7844db9b64--8ffcr-eth0" Jan 23 23:57:13.396389 containerd[2029]: time="2026-01-23T23:57:13.396212853Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:13.397828 containerd[2029]: time="2026-01-23T23:57:13.397407933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:13.398047 containerd[2029]: time="2026-01-23T23:57:13.397543569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:13.398047 containerd[2029]: time="2026-01-23T23:57:13.397733361Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:13.462859 systemd[1]: Started cri-containerd-539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848.scope - libcontainer container 539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848. 
Jan 23 23:57:13.696241 containerd[2029]: time="2026-01-23T23:57:13.696072334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7844db9b64-8ffcr,Uid:93ccd330-a859-4470-8f8e-396ff6ffb624,Namespace:calico-system,Attempt:0,} returns sandbox id \"539da0aba66fde7b4b47e2e6cfdbc5f6834df4423a828fa74f94d3a84c534848\"" Jan 23 23:57:13.701231 containerd[2029]: time="2026-01-23T23:57:13.700533766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:57:14.062604 kubelet[3407]: I0123 23:57:14.061237 3407 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50347932-36ba-4e98-91df-99f4d941be59" path="/var/lib/kubelet/pods/50347932-36ba-4e98-91df-99f4d941be59/volumes" Jan 23 23:57:14.198782 containerd[2029]: time="2026-01-23T23:57:14.198721869Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:14.202494 containerd[2029]: time="2026-01-23T23:57:14.202038885Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:57:14.202494 containerd[2029]: time="2026-01-23T23:57:14.202218465Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:57:14.202800 kubelet[3407]: E0123 23:57:14.202495 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:57:14.202800 kubelet[3407]: E0123 23:57:14.202570 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:57:14.210021 kubelet[3407]: E0123 23:57:14.209915 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c5e6be3eac514cecb383af9368500204,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jp99b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7844db9b64-8ffcr_calico-system(93ccd330-a859-4470-8f8e-396ff6ffb624): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:14.213380 containerd[2029]: time="2026-01-23T23:57:14.213305373Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:57:14.301597 kernel: bpftool[4884]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 23 23:57:14.478106 containerd[2029]: time="2026-01-23T23:57:14.478026442Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:14.480673 containerd[2029]: time="2026-01-23T23:57:14.480336682Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:57:14.480673 containerd[2029]: time="2026-01-23T23:57:14.480434014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:57:14.481804 kubelet[3407]: E0123 23:57:14.480692 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:57:14.481804 kubelet[3407]: E0123 23:57:14.480764 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to 
resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:57:14.481980 kubelet[3407]: E0123 23:57:14.480969 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jp99b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7844db9b64-8ffcr_calico-system(93ccd330-a859-4470-8f8e-396ff6ffb624): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:14.482972 kubelet[3407]: E0123 23:57:14.482320 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7844db9b64-8ffcr" podUID="93ccd330-a859-4470-8f8e-396ff6ffb624" Jan 23 23:57:14.530229 kubelet[3407]: E0123 23:57:14.530166 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7844db9b64-8ffcr" podUID="93ccd330-a859-4470-8f8e-396ff6ffb624" Jan 23 23:57:14.689855 systemd-networkd[1942]: cali74a60d8b08b: Gained IPv6LL Jan 23 23:57:14.822978 systemd-networkd[1942]: vxlan.calico: Link UP Jan 23 23:57:14.822993 systemd-networkd[1942]: vxlan.calico: Gained carrier Jan 23 23:57:14.966809 (udev-worker)[4592]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:57:15.040438 containerd[2029]: time="2026-01-23T23:57:15.038776125Z" level=info msg="StopPodSandbox for \"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\"" Jan 23 23:57:15.253755 containerd[2029]: 2026-01-23 23:57:15.169 [INFO][4930] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" Jan 23 23:57:15.253755 containerd[2029]: 2026-01-23 23:57:15.169 [INFO][4930] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" iface="eth0" netns="/var/run/netns/cni-c2bc8985-c7f3-ec9c-7fab-1e1bb6580062" Jan 23 23:57:15.253755 containerd[2029]: 2026-01-23 23:57:15.171 [INFO][4930] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" iface="eth0" netns="/var/run/netns/cni-c2bc8985-c7f3-ec9c-7fab-1e1bb6580062" Jan 23 23:57:15.253755 containerd[2029]: 2026-01-23 23:57:15.179 [INFO][4930] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" iface="eth0" netns="/var/run/netns/cni-c2bc8985-c7f3-ec9c-7fab-1e1bb6580062" Jan 23 23:57:15.253755 containerd[2029]: 2026-01-23 23:57:15.179 [INFO][4930] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" Jan 23 23:57:15.253755 containerd[2029]: 2026-01-23 23:57:15.179 [INFO][4930] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" Jan 23 23:57:15.253755 containerd[2029]: 2026-01-23 23:57:15.225 [INFO][4937] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" HandleID="k8s-pod-network.fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0" Jan 23 23:57:15.253755 containerd[2029]: 2026-01-23 23:57:15.225 [INFO][4937] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 23:57:15.253755 containerd[2029]: 2026-01-23 23:57:15.225 [INFO][4937] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:15.253755 containerd[2029]: 2026-01-23 23:57:15.244 [WARNING][4937] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" HandleID="k8s-pod-network.fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0" Jan 23 23:57:15.253755 containerd[2029]: 2026-01-23 23:57:15.244 [INFO][4937] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" HandleID="k8s-pod-network.fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0" Jan 23 23:57:15.253755 containerd[2029]: 2026-01-23 23:57:15.246 [INFO][4937] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:15.253755 containerd[2029]: 2026-01-23 23:57:15.249 [INFO][4930] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" Jan 23 23:57:15.258262 containerd[2029]: time="2026-01-23T23:57:15.255316162Z" level=info msg="TearDown network for sandbox \"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\" successfully" Jan 23 23:57:15.258262 containerd[2029]: time="2026-01-23T23:57:15.255382330Z" level=info msg="StopPodSandbox for \"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\" returns successfully" Jan 23 23:57:15.258262 containerd[2029]: time="2026-01-23T23:57:15.258202294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fdd48bcf6-n88dg,Uid:17163695-eef5-4bf6-be5b-0d305316c85b,Namespace:calico-apiserver,Attempt:1,}" Jan 23 23:57:15.261361 systemd[1]: run-netns-cni\x2dc2bc8985\x2dc7f3\x2dec9c\x2d7fab\x2d1e1bb6580062.mount: Deactivated successfully. 
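Both whisker image pulls above fail with NotFound rather than an auth or network error, which means the tags are simply absent from the registry. One way to confirm that independently of the kubelet's backoff is to query the OCI distribution API directly. The sketch below assumes ghcr.io's standard anonymous token flow for public repositories; the endpoint and parameters are an assumption, not taken from the log.

```python
# Probe a registry for a tag's manifest; 404 matches the "not found" above.
import json
import urllib.error
import urllib.request

def tag_exists(registry: str, repo: str, tag: str) -> bool:
    # Anonymous pull token (assumed standard Docker/OCI token endpoint).
    tok_url = f"https://{registry}/token?service={registry}&scope=repository:{repo}:pull"
    with urllib.request.urlopen(tok_url) as resp:
        token = json.load(resp)["token"]
    req = urllib.request.Request(
        f"https://{registry}/v2/{repo}/manifests/{tag}",
        method="HEAD",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json",
        },
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise

print(tag_exists("ghcr.io", "flatcar/calico/whisker", "v3.30.4"))
```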
Jan 23 23:57:15.541259 kubelet[3407]: E0123 23:57:15.539910 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7844db9b64-8ffcr" podUID="93ccd330-a859-4470-8f8e-396ff6ffb624" Jan 23 23:57:15.589536 systemd-networkd[1942]: calic97b1363e84: Link UP Jan 23 23:57:15.590041 systemd-networkd[1942]: calic97b1363e84: Gained carrier Jan 23 23:57:15.649909 containerd[2029]: 2026-01-23 23:57:15.415 [INFO][4945] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0 calico-apiserver-7fdd48bcf6- calico-apiserver 17163695-eef5-4bf6-be5b-0d305316c85b 1012 0 2026-01-23 23:56:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fdd48bcf6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-253 calico-apiserver-7fdd48bcf6-n88dg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic97b1363e84 [] [] }} ContainerID="abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296" Namespace="calico-apiserver" Pod="calico-apiserver-7fdd48bcf6-n88dg" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-" Jan 23 23:57:15.649909 containerd[2029]: 2026-01-23 23:57:15.416 [INFO][4945] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296" Namespace="calico-apiserver" Pod="calico-apiserver-7fdd48bcf6-n88dg" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0" Jan 23 23:57:15.649909 containerd[2029]: 2026-01-23 23:57:15.480 [INFO][4959] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296" HandleID="k8s-pod-network.abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0" Jan 23 23:57:15.649909 containerd[2029]: 2026-01-23 23:57:15.480 [INFO][4959] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296" HandleID="k8s-pod-network.abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-20-253", 
"pod":"calico-apiserver-7fdd48bcf6-n88dg", "timestamp":"2026-01-23 23:57:15.480400979 +0000 UTC"}, Hostname:"ip-172-31-20-253", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:57:15.649909 containerd[2029]: 2026-01-23 23:57:15.480 [INFO][4959] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:15.649909 containerd[2029]: 2026-01-23 23:57:15.481 [INFO][4959] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:15.649909 containerd[2029]: 2026-01-23 23:57:15.481 [INFO][4959] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-253' Jan 23 23:57:15.649909 containerd[2029]: 2026-01-23 23:57:15.499 [INFO][4959] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296" host="ip-172-31-20-253" Jan 23 23:57:15.649909 containerd[2029]: 2026-01-23 23:57:15.507 [INFO][4959] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-253" Jan 23 23:57:15.649909 containerd[2029]: 2026-01-23 23:57:15.521 [INFO][4959] ipam/ipam.go 511: Trying affinity for 192.168.18.128/26 host="ip-172-31-20-253" Jan 23 23:57:15.649909 containerd[2029]: 2026-01-23 23:57:15.526 [INFO][4959] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.128/26 host="ip-172-31-20-253" Jan 23 23:57:15.649909 containerd[2029]: 2026-01-23 23:57:15.537 [INFO][4959] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.128/26 host="ip-172-31-20-253" Jan 23 23:57:15.649909 containerd[2029]: 2026-01-23 23:57:15.537 [INFO][4959] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.18.128/26 handle="k8s-pod-network.abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296" host="ip-172-31-20-253" Jan 23 23:57:15.649909 containerd[2029]: 2026-01-23 23:57:15.546 [INFO][4959] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296 Jan 23 23:57:15.649909 containerd[2029]: 2026-01-23 23:57:15.563 [INFO][4959] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.18.128/26 handle="k8s-pod-network.abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296" host="ip-172-31-20-253" Jan 23 23:57:15.649909 containerd[2029]: 2026-01-23 23:57:15.579 [INFO][4959] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.18.130/26] block=192.168.18.128/26 handle="k8s-pod-network.abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296" host="ip-172-31-20-253" Jan 23 23:57:15.649909 containerd[2029]: 2026-01-23 23:57:15.579 [INFO][4959] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.130/26] handle="k8s-pod-network.abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296" host="ip-172-31-20-253" Jan 23 23:57:15.649909 containerd[2029]: 2026-01-23 23:57:15.579 [INFO][4959] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:57:15.649909 containerd[2029]: 2026-01-23 23:57:15.579 [INFO][4959] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.18.130/26] IPv6=[] ContainerID="abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296" HandleID="k8s-pod-network.abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0" Jan 23 23:57:15.651787 containerd[2029]: 2026-01-23 23:57:15.583 [INFO][4945] cni-plugin/k8s.go 418: Populated endpoint ContainerID="abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296" Namespace="calico-apiserver" Pod="calico-apiserver-7fdd48bcf6-n88dg" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0", GenerateName:"calico-apiserver-7fdd48bcf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"17163695-eef5-4bf6-be5b-0d305316c85b", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fdd48bcf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"", Pod:"calico-apiserver-7fdd48bcf6-n88dg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic97b1363e84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:15.651787 containerd[2029]: 2026-01-23 23:57:15.583 [INFO][4945] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.130/32] ContainerID="abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296" Namespace="calico-apiserver" Pod="calico-apiserver-7fdd48bcf6-n88dg" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0" Jan 23 23:57:15.651787 containerd[2029]: 2026-01-23 23:57:15.584 [INFO][4945] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic97b1363e84 ContainerID="abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296" Namespace="calico-apiserver" Pod="calico-apiserver-7fdd48bcf6-n88dg" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0" Jan 23 23:57:15.651787 containerd[2029]: 2026-01-23 23:57:15.587 [INFO][4945] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296" Namespace="calico-apiserver" Pod="calico-apiserver-7fdd48bcf6-n88dg" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0" Jan 23 23:57:15.651787 containerd[2029]: 2026-01-23 23:57:15.588 [INFO][4945] cni-plugin/k8s.go 446: Added Mac, 
interface name, and active container ID to endpoint ContainerID="abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296" Namespace="calico-apiserver" Pod="calico-apiserver-7fdd48bcf6-n88dg" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0", GenerateName:"calico-apiserver-7fdd48bcf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"17163695-eef5-4bf6-be5b-0d305316c85b", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fdd48bcf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296", Pod:"calico-apiserver-7fdd48bcf6-n88dg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic97b1363e84", MAC:"c2:38:56:d1:9c:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:15.651787 containerd[2029]: 2026-01-23 23:57:15.643 [INFO][4945] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296" Namespace="calico-apiserver" Pod="calico-apiserver-7fdd48bcf6-n88dg" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0" Jan 23 23:57:15.733936 containerd[2029]: time="2026-01-23T23:57:15.733756728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:15.733936 containerd[2029]: time="2026-01-23T23:57:15.733852224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:15.734285 containerd[2029]: time="2026-01-23T23:57:15.733879488Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:15.734559 containerd[2029]: time="2026-01-23T23:57:15.734165580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:15.797521 systemd[1]: Started cri-containerd-abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296.scope - libcontainer container abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296. 
Jan 23 23:57:15.893864 containerd[2029]: time="2026-01-23T23:57:15.893675965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fdd48bcf6-n88dg,Uid:17163695-eef5-4bf6-be5b-0d305316c85b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296\"" Jan 23 23:57:15.898017 containerd[2029]: time="2026-01-23T23:57:15.897505309Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:57:16.040317 containerd[2029]: time="2026-01-23T23:57:16.039889342Z" level=info msg="StopPodSandbox for \"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\"" Jan 23 23:57:16.040863 containerd[2029]: time="2026-01-23T23:57:16.040825930Z" level=info msg="StopPodSandbox for \"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\"" Jan 23 23:57:16.176817 containerd[2029]: time="2026-01-23T23:57:16.175622278Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:16.179180 containerd[2029]: time="2026-01-23T23:57:16.179021326Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:57:16.179506 containerd[2029]: time="2026-01-23T23:57:16.179111158Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:57:16.180024 kubelet[3407]: E0123 23:57:16.179970 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:16.180491 kubelet[3407]: E0123 23:57:16.180183 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:16.180491 kubelet[3407]: E0123 23:57:16.180390 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b545h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fdd48bcf6-n88dg_calico-apiserver(17163695-eef5-4bf6-be5b-0d305316c85b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:16.182199 kubelet[3407]: E0123 23:57:16.182013 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-n88dg" podUID="17163695-eef5-4bf6-be5b-0d305316c85b" Jan 23 23:57:16.284508 containerd[2029]: 2026-01-23 23:57:16.172 [INFO][5062] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" Jan 23 23:57:16.284508 containerd[2029]: 2026-01-23 23:57:16.172 [INFO][5062] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" iface="eth0" netns="/var/run/netns/cni-098377b3-3408-984e-d0ca-d2d080b1523e" Jan 23 23:57:16.284508 containerd[2029]: 2026-01-23 23:57:16.173 [INFO][5062] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" iface="eth0" netns="/var/run/netns/cni-098377b3-3408-984e-d0ca-d2d080b1523e" Jan 23 23:57:16.284508 containerd[2029]: 2026-01-23 23:57:16.174 [INFO][5062] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" iface="eth0" netns="/var/run/netns/cni-098377b3-3408-984e-d0ca-d2d080b1523e" Jan 23 23:57:16.284508 containerd[2029]: 2026-01-23 23:57:16.174 [INFO][5062] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" Jan 23 23:57:16.284508 containerd[2029]: 2026-01-23 23:57:16.174 [INFO][5062] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" Jan 23 23:57:16.284508 containerd[2029]: 2026-01-23 23:57:16.255 [INFO][5080] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" HandleID="k8s-pod-network.42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" Workload="ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0" Jan 23 23:57:16.284508 containerd[2029]: 2026-01-23 23:57:16.256 [INFO][5080] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:16.284508 containerd[2029]: 2026-01-23 23:57:16.256 [INFO][5080] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:16.284508 containerd[2029]: 2026-01-23 23:57:16.275 [WARNING][5080] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" HandleID="k8s-pod-network.42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" Workload="ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0" Jan 23 23:57:16.284508 containerd[2029]: 2026-01-23 23:57:16.275 [INFO][5080] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" HandleID="k8s-pod-network.42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" Workload="ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0" Jan 23 23:57:16.284508 containerd[2029]: 2026-01-23 23:57:16.277 [INFO][5080] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:16.284508 containerd[2029]: 2026-01-23 23:57:16.281 [INFO][5062] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" Jan 23 23:57:16.297363 containerd[2029]: time="2026-01-23T23:57:16.292857443Z" level=info msg="TearDown network for sandbox \"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\" successfully" Jan 23 23:57:16.296621 systemd[1]: run-netns-cni\x2d098377b3\x2d3408\x2d984e\x2dd0ca\x2dd2d080b1523e.mount: Deactivated successfully. 
Jan 23 23:57:16.298277 containerd[2029]: time="2026-01-23T23:57:16.298022519Z" level=info msg="StopPodSandbox for \"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\" returns successfully" Jan 23 23:57:16.301305 containerd[2029]: time="2026-01-23T23:57:16.301101875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f867bfb44-djs5n,Uid:471e63ba-4009-4390-becb-d3cf35fc95c6,Namespace:calico-system,Attempt:1,}" Jan 23 23:57:16.314873 containerd[2029]: 2026-01-23 23:57:16.183 [INFO][5071] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" Jan 23 23:57:16.314873 containerd[2029]: 2026-01-23 23:57:16.184 [INFO][5071] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" iface="eth0" netns="/var/run/netns/cni-89217fc1-da38-1189-0e8c-e99e9159e5ef" Jan 23 23:57:16.314873 containerd[2029]: 2026-01-23 23:57:16.186 [INFO][5071] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" iface="eth0" netns="/var/run/netns/cni-89217fc1-da38-1189-0e8c-e99e9159e5ef" Jan 23 23:57:16.314873 containerd[2029]: 2026-01-23 23:57:16.187 [INFO][5071] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" iface="eth0" netns="/var/run/netns/cni-89217fc1-da38-1189-0e8c-e99e9159e5ef" Jan 23 23:57:16.314873 containerd[2029]: 2026-01-23 23:57:16.187 [INFO][5071] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" Jan 23 23:57:16.314873 containerd[2029]: 2026-01-23 23:57:16.188 [INFO][5071] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" Jan 23 23:57:16.314873 containerd[2029]: 2026-01-23 23:57:16.258 [INFO][5085] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" HandleID="k8s-pod-network.6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" Workload="ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0" Jan 23 23:57:16.314873 containerd[2029]: 2026-01-23 23:57:16.259 [INFO][5085] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:16.314873 containerd[2029]: 2026-01-23 23:57:16.278 [INFO][5085] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:16.314873 containerd[2029]: 2026-01-23 23:57:16.302 [WARNING][5085] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" HandleID="k8s-pod-network.6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" Workload="ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0" Jan 23 23:57:16.314873 containerd[2029]: 2026-01-23 23:57:16.302 [INFO][5085] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" HandleID="k8s-pod-network.6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" Workload="ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0" Jan 23 23:57:16.314873 containerd[2029]: 2026-01-23 23:57:16.305 [INFO][5085] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:16.314873 containerd[2029]: 2026-01-23 23:57:16.309 [INFO][5071] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" Jan 23 23:57:16.318636 containerd[2029]: time="2026-01-23T23:57:16.317533859Z" level=info msg="TearDown network for sandbox \"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\" successfully" Jan 23 23:57:16.318636 containerd[2029]: time="2026-01-23T23:57:16.317579303Z" level=info msg="StopPodSandbox for \"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\" returns successfully" Jan 23 23:57:16.320730 containerd[2029]: time="2026-01-23T23:57:16.319911215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mmfhn,Uid:046ae13d-0e4a-437d-9371-4ba65edfa713,Namespace:calico-system,Attempt:1,}" Jan 23 23:57:16.322401 systemd[1]: run-netns-cni\x2d89217fc1\x2dda38\x2d1189\x2d0e8c\x2de99e9159e5ef.mount: Deactivated successfully. Jan 23 23:57:16.543913 systemd[1]: Started sshd@8-172.31.20.253:22-4.153.228.146:40668.service - OpenSSH per-connection server daemon (4.153.228.146:40668). 
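The "\x2d" runs in unit names like run-netns-cni\x2d89217fc1\x2dda38\x2d1189\x2d0e8c\x2de99e9159e5ef.mount are systemd unit-name escaping, not log corruption: when systemd derives a mount unit from a path, "/" becomes "-", so any literal "-" in the path has to be escaped as its hex code \x2d. A simplified sketch of that path escaping (real systemd-escape --path handles a few more cases, such as a leading dot, so this is an approximation):

    package main

    import (
        "fmt"
        "strings"
    )

    // systemdEscapePath approximates `systemd-escape --path`: strip the
    // outer slashes, turn "/" into "-", and hex-escape every byte that is
    // not an ASCII alphanumeric, "_" or ".".
    func systemdEscapePath(p string) string {
        p = strings.Trim(p, "/")
        var b strings.Builder
        for i := 0; i < len(p); i++ {
            c := p[i]
            switch {
            case c == '/':
                b.WriteByte('-')
            case c == '_' || c == '.' ||
                ('a' <= c && c <= 'z') || ('A' <= c && c <= 'Z') || ('0' <= c && c <= '9'):
                b.WriteByte(c)
            default:
                fmt.Fprintf(&b, `\x%02x`, c)
            }
        }
        return b.String()
    }

    func main() {
        // Prints run-netns-cni\x2d89217fc1\x2dda38\x2d1189\x2d0e8c\x2de99e9159e5ef,
        // the mount unit above minus its ".mount" suffix.
        fmt.Println(systemdEscapePath("/run/netns/cni-89217fc1-da38-1189-0e8c-e99e9159e5ef"))
    }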
Jan 23 23:57:16.572568 kubelet[3407]: E0123 23:57:16.572487 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-n88dg" podUID="17163695-eef5-4bf6-be5b-0d305316c85b" Jan 23 23:57:16.737675 systemd-networkd[1942]: vxlan.calico: Gained IPv6LL Jan 23 23:57:16.757357 systemd-networkd[1942]: cali50b68ae06a2: Link UP Jan 23 23:57:16.762612 systemd-networkd[1942]: cali50b68ae06a2: Gained carrier Jan 23 23:57:16.836943 containerd[2029]: 2026-01-23 23:57:16.471 [INFO][5093] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0 calico-kube-controllers-5f867bfb44- calico-system 471e63ba-4009-4390-becb-d3cf35fc95c6 1033 0 2026-01-23 23:56:54 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f867bfb44 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-20-253 calico-kube-controllers-5f867bfb44-djs5n eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali50b68ae06a2 [] [] }} ContainerID="818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c" Namespace="calico-system" Pod="calico-kube-controllers-5f867bfb44-djs5n" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-" Jan 23 23:57:16.836943 containerd[2029]: 2026-01-23 23:57:16.474 [INFO][5093] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c" Namespace="calico-system" Pod="calico-kube-controllers-5f867bfb44-djs5n" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0" Jan 23 23:57:16.836943 containerd[2029]: 2026-01-23 23:57:16.639 [INFO][5116] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c" HandleID="k8s-pod-network.818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c" Workload="ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0" Jan 23 23:57:16.836943 containerd[2029]: 2026-01-23 23:57:16.640 [INFO][5116] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c" HandleID="k8s-pod-network.818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c" Workload="ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dd30), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-253", "pod":"calico-kube-controllers-5f867bfb44-djs5n", "timestamp":"2026-01-23 23:57:16.639232213 +0000 UTC"}, Hostname:"ip-172-31-20-253", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:57:16.836943 containerd[2029]: 2026-01-23 23:57:16.640 [INFO][5116] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:16.836943 containerd[2029]: 2026-01-23 23:57:16.640 [INFO][5116] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:16.836943 containerd[2029]: 2026-01-23 23:57:16.641 [INFO][5116] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-253' Jan 23 23:57:16.836943 containerd[2029]: 2026-01-23 23:57:16.673 [INFO][5116] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c" host="ip-172-31-20-253" Jan 23 23:57:16.836943 containerd[2029]: 2026-01-23 23:57:16.683 [INFO][5116] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-253" Jan 23 23:57:16.836943 containerd[2029]: 2026-01-23 23:57:16.708 [INFO][5116] ipam/ipam.go 511: Trying affinity for 192.168.18.128/26 host="ip-172-31-20-253" Jan 23 23:57:16.836943 containerd[2029]: 2026-01-23 23:57:16.712 [INFO][5116] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.128/26 host="ip-172-31-20-253" Jan 23 23:57:16.836943 containerd[2029]: 2026-01-23 23:57:16.716 [INFO][5116] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.128/26 host="ip-172-31-20-253" Jan 23 23:57:16.836943 containerd[2029]: 2026-01-23 23:57:16.717 [INFO][5116] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.18.128/26 handle="k8s-pod-network.818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c" host="ip-172-31-20-253" Jan 23 23:57:16.836943 containerd[2029]: 2026-01-23 23:57:16.719 [INFO][5116] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c Jan 23 23:57:16.836943 containerd[2029]: 2026-01-23 23:57:16.727 [INFO][5116] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.18.128/26 handle="k8s-pod-network.818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c" host="ip-172-31-20-253" Jan 23 23:57:16.836943 containerd[2029]: 2026-01-23 23:57:16.740 [INFO][5116] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.18.131/26] block=192.168.18.128/26 handle="k8s-pod-network.818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c" host="ip-172-31-20-253" Jan 23 23:57:16.836943 containerd[2029]: 2026-01-23 23:57:16.741 [INFO][5116] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.131/26] handle="k8s-pod-network.818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c" host="ip-172-31-20-253" Jan 23 23:57:16.836943 containerd[2029]: 2026-01-23 23:57:16.741 [INFO][5116] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
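The ipam/ipam.go lines above trace the whole allocation algorithm for one pod: take the host-wide lock, look up this host's block affinity, load the affine block (192.168.18.128/26), scan it for a free address, record a handle named after the sandbox ID, write the block back, then release the lock. A toy in-memory model of that scan, just to make the shape of the algorithm concrete (real Calico persists blocks in the datastore, negotiates affinities across hosts, and manages reservations; the pre-claimed entries below merely stand in for addresses allocated earlier in this log):

    package main

    import (
        "fmt"
        "net/netip"
        "sync"
    )

    // block is a toy stand-in for a Calico IPAM affinity block.
    type block struct {
        mu     sync.Mutex            // plays the role of the host-wide IPAM lock
        prefix netip.Prefix          // e.g. 192.168.18.128/26
        used   map[netip.Addr]string // address -> allocation handle
    }

    // claim scans the block for the first free address and records the
    // handle, mirroring "Attempting to assign 1 addresses from block".
    func (b *block) claim(handle string) (netip.Addr, bool) {
        b.mu.Lock()
        defer b.mu.Unlock()
        for a := b.prefix.Addr(); b.prefix.Contains(a); a = a.Next() {
            if _, taken := b.used[a]; !taken {
                b.used[a] = handle
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        b := &block{
            prefix: netip.MustParsePrefix("192.168.18.128/26"),
            used: map[netip.Addr]string{
                netip.MustParseAddr("192.168.18.128"): "reserved",
                netip.MustParseAddr("192.168.18.129"): "earlier-allocation",
                netip.MustParseAddr("192.168.18.130"): "earlier-allocation",
            },
        }
        a, ok := b.claim("k8s-pod-network.818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c")
        fmt.Println(a, ok) // 192.168.18.131 true, matching the claim in the log
    }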
Jan 23 23:57:16.836943 containerd[2029]: 2026-01-23 23:57:16.741 [INFO][5116] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.18.131/26] IPv6=[] ContainerID="818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c" HandleID="k8s-pod-network.818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c" Workload="ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0" Jan 23 23:57:16.842849 containerd[2029]: 2026-01-23 23:57:16.749 [INFO][5093] cni-plugin/k8s.go 418: Populated endpoint ContainerID="818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c" Namespace="calico-system" Pod="calico-kube-controllers-5f867bfb44-djs5n" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0", GenerateName:"calico-kube-controllers-5f867bfb44-", Namespace:"calico-system", SelfLink:"", UID:"471e63ba-4009-4390-becb-d3cf35fc95c6", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f867bfb44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"", Pod:"calico-kube-controllers-5f867bfb44-djs5n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali50b68ae06a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:16.842849 containerd[2029]: 2026-01-23 23:57:16.749 [INFO][5093] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.131/32] ContainerID="818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c" Namespace="calico-system" Pod="calico-kube-controllers-5f867bfb44-djs5n" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0" Jan 23 23:57:16.842849 containerd[2029]: 2026-01-23 23:57:16.749 [INFO][5093] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50b68ae06a2 ContainerID="818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c" Namespace="calico-system" Pod="calico-kube-controllers-5f867bfb44-djs5n" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0" Jan 23 23:57:16.842849 containerd[2029]: 2026-01-23 23:57:16.772 [INFO][5093] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c" Namespace="calico-system" Pod="calico-kube-controllers-5f867bfb44-djs5n" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0" Jan 23 23:57:16.842849 
containerd[2029]: 2026-01-23 23:57:16.793 [INFO][5093] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c" Namespace="calico-system" Pod="calico-kube-controllers-5f867bfb44-djs5n" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0", GenerateName:"calico-kube-controllers-5f867bfb44-", Namespace:"calico-system", SelfLink:"", UID:"471e63ba-4009-4390-becb-d3cf35fc95c6", ResourceVersion:"1033", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f867bfb44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c", Pod:"calico-kube-controllers-5f867bfb44-djs5n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali50b68ae06a2", MAC:"4a:89:77:d9:4a:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:16.842849 containerd[2029]: 2026-01-23 23:57:16.832 [INFO][5093] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c" Namespace="calico-system" Pod="calico-kube-controllers-5f867bfb44-djs5n" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0" Jan 23 23:57:16.909798 systemd-networkd[1942]: cali8093fbf5aaf: Link UP Jan 23 23:57:16.914334 systemd-networkd[1942]: cali8093fbf5aaf: Gained carrier Jan 23 23:57:16.942931 containerd[2029]: time="2026-01-23T23:57:16.940094834Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:16.942931 containerd[2029]: time="2026-01-23T23:57:16.940192730Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:16.942931 containerd[2029]: time="2026-01-23T23:57:16.940219898Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:16.942931 containerd[2029]: time="2026-01-23T23:57:16.940369430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:16.977683 containerd[2029]: 2026-01-23 23:57:16.480 [INFO][5103] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0 goldmane-666569f655- calico-system 046ae13d-0e4a-437d-9371-4ba65edfa713 1034 0 2026-01-23 23:56:48 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-20-253 goldmane-666569f655-mmfhn eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali8093fbf5aaf [] [] }} ContainerID="820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce" Namespace="calico-system" Pod="goldmane-666569f655-mmfhn" WorkloadEndpoint="ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-" Jan 23 23:57:16.977683 containerd[2029]: 2026-01-23 23:57:16.481 [INFO][5103] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce" Namespace="calico-system" Pod="goldmane-666569f655-mmfhn" WorkloadEndpoint="ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0" Jan 23 23:57:16.977683 containerd[2029]: 2026-01-23 23:57:16.667 [INFO][5119] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce" HandleID="k8s-pod-network.820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce" Workload="ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0" Jan 23 23:57:16.977683 containerd[2029]: 2026-01-23 23:57:16.668 [INFO][5119] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce" HandleID="k8s-pod-network.820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce" Workload="ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003354d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-253", "pod":"goldmane-666569f655-mmfhn", "timestamp":"2026-01-23 23:57:16.667119169 +0000 UTC"}, Hostname:"ip-172-31-20-253", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:57:16.977683 containerd[2029]: 2026-01-23 23:57:16.668 [INFO][5119] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:16.977683 containerd[2029]: 2026-01-23 23:57:16.741 [INFO][5119] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:57:16.977683 containerd[2029]: 2026-01-23 23:57:16.741 [INFO][5119] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-253' Jan 23 23:57:16.977683 containerd[2029]: 2026-01-23 23:57:16.781 [INFO][5119] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce" host="ip-172-31-20-253" Jan 23 23:57:16.977683 containerd[2029]: 2026-01-23 23:57:16.809 [INFO][5119] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-253" Jan 23 23:57:16.977683 containerd[2029]: 2026-01-23 23:57:16.820 [INFO][5119] ipam/ipam.go 511: Trying affinity for 192.168.18.128/26 host="ip-172-31-20-253" Jan 23 23:57:16.977683 containerd[2029]: 2026-01-23 23:57:16.831 [INFO][5119] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.128/26 host="ip-172-31-20-253" Jan 23 23:57:16.977683 containerd[2029]: 2026-01-23 23:57:16.837 [INFO][5119] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.128/26 host="ip-172-31-20-253" Jan 23 23:57:16.977683 containerd[2029]: 2026-01-23 23:57:16.837 [INFO][5119] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.18.128/26 handle="k8s-pod-network.820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce" host="ip-172-31-20-253" Jan 23 23:57:16.977683 containerd[2029]: 2026-01-23 23:57:16.841 [INFO][5119] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce Jan 23 23:57:16.977683 containerd[2029]: 2026-01-23 23:57:16.856 [INFO][5119] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.18.128/26 handle="k8s-pod-network.820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce" host="ip-172-31-20-253" Jan 23 23:57:16.977683 containerd[2029]: 2026-01-23 23:57:16.890 [INFO][5119] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.18.132/26] block=192.168.18.128/26 handle="k8s-pod-network.820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce" host="ip-172-31-20-253" Jan 23 23:57:16.977683 containerd[2029]: 2026-01-23 23:57:16.890 [INFO][5119] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.132/26] handle="k8s-pod-network.820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce" host="ip-172-31-20-253" Jan 23 23:57:16.977683 containerd[2029]: 2026-01-23 23:57:16.890 [INFO][5119] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
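Workload identifiers such as ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0 look mangled but follow a scheme that can be read straight off this log: every "-" inside the node name and the pod name is doubled, leaving a single "-" free to act as an unambiguous separator between the node, the "k8s" orchestrator tag, the pod, and the interface. A sketch of that convention as inferred from the identifiers here (an observation from this log, not taken from Calico source):

    package main

    import (
        "fmt"
        "strings"
    )

    // escape doubles hyphens so single hyphens remain free to act as field
    // separators in WorkloadEndpoint names (rule inferred from the log).
    func escape(s string) string { return strings.ReplaceAll(s, "-", "--") }

    func main() {
        node, pod, iface := "ip-172-31-20-253", "goldmane-666569f655-mmfhn", "eth0"
        fmt.Printf("%s-k8s-%s-%s\n", escape(node), escape(pod), iface)
        // Prints ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0,
        // the Workload value used in the goldmane IPAM entries above.
    }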
Jan 23 23:57:16.977683 containerd[2029]: 2026-01-23 23:57:16.890 [INFO][5119] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.18.132/26] IPv6=[] ContainerID="820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce" HandleID="k8s-pod-network.820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce" Workload="ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0" Jan 23 23:57:16.978845 containerd[2029]: 2026-01-23 23:57:16.898 [INFO][5103] cni-plugin/k8s.go 418: Populated endpoint ContainerID="820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce" Namespace="calico-system" Pod="goldmane-666569f655-mmfhn" WorkloadEndpoint="ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"046ae13d-0e4a-437d-9371-4ba65edfa713", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"", Pod:"goldmane-666569f655-mmfhn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8093fbf5aaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:16.978845 containerd[2029]: 2026-01-23 23:57:16.898 [INFO][5103] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.132/32] ContainerID="820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce" Namespace="calico-system" Pod="goldmane-666569f655-mmfhn" WorkloadEndpoint="ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0" Jan 23 23:57:16.978845 containerd[2029]: 2026-01-23 23:57:16.898 [INFO][5103] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8093fbf5aaf ContainerID="820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce" Namespace="calico-system" Pod="goldmane-666569f655-mmfhn" WorkloadEndpoint="ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0" Jan 23 23:57:16.978845 containerd[2029]: 2026-01-23 23:57:16.924 [INFO][5103] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce" Namespace="calico-system" Pod="goldmane-666569f655-mmfhn" WorkloadEndpoint="ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0" Jan 23 23:57:16.978845 containerd[2029]: 2026-01-23 23:57:16.930 [INFO][5103] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce" Namespace="calico-system" Pod="goldmane-666569f655-mmfhn" 
WorkloadEndpoint="ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"046ae13d-0e4a-437d-9371-4ba65edfa713", ResourceVersion:"1034", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce", Pod:"goldmane-666569f655-mmfhn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8093fbf5aaf", MAC:"a2:6e:a6:27:6f:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:16.978845 containerd[2029]: 2026-01-23 23:57:16.973 [INFO][5103] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce" Namespace="calico-system" Pod="goldmane-666569f655-mmfhn" WorkloadEndpoint="ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0" Jan 23 23:57:17.002833 systemd[1]: Started cri-containerd-818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c.scope - libcontainer container 818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c. Jan 23 23:57:17.040480 containerd[2029]: time="2026-01-23T23:57:17.040283687Z" level=info msg="StopPodSandbox for \"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\"" Jan 23 23:57:17.087235 containerd[2029]: time="2026-01-23T23:57:17.086757755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:17.087235 containerd[2029]: time="2026-01-23T23:57:17.087098195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:17.087235 containerd[2029]: time="2026-01-23T23:57:17.087139439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:17.089513 containerd[2029]: time="2026-01-23T23:57:17.088831583Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:17.124263 sshd[5126]: Accepted publickey for core from 4.153.228.146 port 40668 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:17.130068 sshd[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:17.156804 systemd[1]: Started cri-containerd-820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce.scope - libcontainer container 820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce. Jan 23 23:57:17.164855 systemd-logind[2003]: New session 9 of user core. Jan 23 23:57:17.171771 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 23:57:17.228319 containerd[2029]: time="2026-01-23T23:57:17.227064732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f867bfb44-djs5n,Uid:471e63ba-4009-4390-becb-d3cf35fc95c6,Namespace:calico-system,Attempt:1,} returns sandbox id \"818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c\"" Jan 23 23:57:17.232352 containerd[2029]: time="2026-01-23T23:57:17.230918400Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:57:17.410890 containerd[2029]: 2026-01-23 23:57:17.287 [INFO][5209] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" Jan 23 23:57:17.410890 containerd[2029]: 2026-01-23 23:57:17.288 [INFO][5209] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" iface="eth0" netns="/var/run/netns/cni-5c604f43-f8cd-5f87-ea9c-a245f8d2545e" Jan 23 23:57:17.410890 containerd[2029]: 2026-01-23 23:57:17.288 [INFO][5209] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" iface="eth0" netns="/var/run/netns/cni-5c604f43-f8cd-5f87-ea9c-a245f8d2545e" Jan 23 23:57:17.410890 containerd[2029]: 2026-01-23 23:57:17.288 [INFO][5209] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" iface="eth0" netns="/var/run/netns/cni-5c604f43-f8cd-5f87-ea9c-a245f8d2545e" Jan 23 23:57:17.410890 containerd[2029]: 2026-01-23 23:57:17.288 [INFO][5209] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" Jan 23 23:57:17.410890 containerd[2029]: 2026-01-23 23:57:17.288 [INFO][5209] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" Jan 23 23:57:17.410890 containerd[2029]: 2026-01-23 23:57:17.374 [INFO][5248] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" HandleID="k8s-pod-network.c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" Workload="ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0" Jan 23 23:57:17.410890 containerd[2029]: 2026-01-23 23:57:17.376 [INFO][5248] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:17.410890 containerd[2029]: 2026-01-23 23:57:17.376 [INFO][5248] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:17.410890 containerd[2029]: 2026-01-23 23:57:17.393 [WARNING][5248] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" HandleID="k8s-pod-network.c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" Workload="ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0" Jan 23 23:57:17.410890 containerd[2029]: 2026-01-23 23:57:17.396 [INFO][5248] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" HandleID="k8s-pod-network.c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" Workload="ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0" Jan 23 23:57:17.410890 containerd[2029]: 2026-01-23 23:57:17.404 [INFO][5248] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:17.410890 containerd[2029]: 2026-01-23 23:57:17.407 [INFO][5209] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" Jan 23 23:57:17.416848 containerd[2029]: time="2026-01-23T23:57:17.413971237Z" level=info msg="TearDown network for sandbox \"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\" successfully" Jan 23 23:57:17.416848 containerd[2029]: time="2026-01-23T23:57:17.416528893Z" level=info msg="StopPodSandbox for \"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\" returns successfully" Jan 23 23:57:17.417719 containerd[2029]: time="2026-01-23T23:57:17.417633625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wc9cs,Uid:116c2572-ef7b-49fd-a16b-25d6e19f65b8,Namespace:calico-system,Attempt:1,}" Jan 23 23:57:17.419971 systemd[1]: run-netns-cni\x2d5c604f43\x2df8cd\x2d5f87\x2dea9c\x2da245f8d2545e.mount: Deactivated successfully. Jan 23 23:57:17.513615 containerd[2029]: time="2026-01-23T23:57:17.511697485Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:17.523513 containerd[2029]: time="2026-01-23T23:57:17.517638241Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:57:17.523513 containerd[2029]: time="2026-01-23T23:57:17.517812601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:57:17.523733 kubelet[3407]: E0123 23:57:17.518097 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:57:17.523733 kubelet[3407]: E0123 23:57:17.518165 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:57:17.523733 kubelet[3407]: E0123 23:57:17.518360 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzbwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f867bfb44-djs5n_calico-system(471e63ba-4009-4390-becb-d3cf35fc95c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:17.523733 kubelet[3407]: E0123 23:57:17.520275 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f867bfb44-djs5n" podUID="471e63ba-4009-4390-becb-d3cf35fc95c6" Jan 23 23:57:17.569702 systemd-networkd[1942]: calic97b1363e84: Gained IPv6LL Jan 23 23:57:17.595733 
kubelet[3407]: E0123 23:57:17.595677 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f867bfb44-djs5n" podUID="471e63ba-4009-4390-becb-d3cf35fc95c6" Jan 23 23:57:17.598526 kubelet[3407]: E0123 23:57:17.597754 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-n88dg" podUID="17163695-eef5-4bf6-be5b-0d305316c85b" Jan 23 23:57:17.752287 containerd[2029]: time="2026-01-23T23:57:17.752110238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-mmfhn,Uid:046ae13d-0e4a-437d-9371-4ba65edfa713,Namespace:calico-system,Attempt:1,} returns sandbox id \"820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce\"" Jan 23 23:57:17.756314 containerd[2029]: time="2026-01-23T23:57:17.756248630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:57:17.847844 sshd[5126]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:17.859257 systemd[1]: sshd@8-172.31.20.253:22-4.153.228.146:40668.service: Deactivated successfully. Jan 23 23:57:17.869624 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 23:57:17.879740 systemd-logind[2003]: Session 9 logged out. Waiting for processes to exit. Jan 23 23:57:17.885566 systemd-logind[2003]: Removed session 9. 
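At this point both failing pods are cycling between ErrImagePull (a pull actually failed, as in the kube-controllers entry above) and ImagePullBackOff (kubelet waiting out its backoff before retrying). The registry error kubelet logs here is also surfaced verbatim in each pod's container status, so it can be read from the API instead of the journal; a minimal client-go sketch (the kubeconfig path is an assumption, and in-cluster config would work equally well):

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        pods, err := cs.CoreV1().Pods("calico-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, p := range pods.Items {
            for _, s := range p.Status.ContainerStatuses {
                // Waiting carries "ErrImagePull" / "ImagePullBackOff" plus
                // the same message kubelet writes to the journal.
                if w := s.State.Waiting; w != nil {
                    fmt.Printf("%s/%s: %s: %s\n", p.Name, s.Name, w.Reason, w.Message)
                }
            }
        }
    }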
Jan 23 23:57:18.017782 systemd-networkd[1942]: cali50b68ae06a2: Gained IPv6LL Jan 23 23:57:18.028231 systemd-networkd[1942]: calidf0375cbfb5: Link UP Jan 23 23:57:18.030748 systemd-networkd[1942]: calidf0375cbfb5: Gained carrier Jan 23 23:57:18.032489 containerd[2029]: time="2026-01-23T23:57:18.031952940Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:18.039108 containerd[2029]: time="2026-01-23T23:57:18.038171448Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:57:18.039108 containerd[2029]: time="2026-01-23T23:57:18.038273160Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 23:57:18.042070 kubelet[3407]: E0123 23:57:18.039681 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:57:18.042070 kubelet[3407]: E0123 23:57:18.039770 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:57:18.042070 kubelet[3407]: E0123 23:57:18.039949 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-clm5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mmfhn_calico-system(046ae13d-0e4a-437d-9371-4ba65edfa713): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:18.043598 kubelet[3407]: E0123 23:57:18.043072 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mmfhn" podUID="046ae13d-0e4a-437d-9371-4ba65edfa713" Jan 23 23:57:18.057957 containerd[2029]: time="2026-01-23T23:57:18.056604624Z" level=info msg="StopPodSandbox for \"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\"" Jan 23 23:57:18.058311 containerd[2029]: time="2026-01-23T23:57:18.058265244Z" level=info msg="StopPodSandbox for \"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\"" Jan 23 23:57:18.104714 containerd[2029]: time="2026-01-23T23:57:18.104660100Z" level=info msg="StopPodSandbox for \"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\"" Jan 23 23:57:18.118413 containerd[2029]: 2026-01-23 23:57:17.754 [INFO][5262] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0 csi-node-driver- calico-system 116c2572-ef7b-49fd-a16b-25d6e19f65b8 1055 0 2026-01-23 23:56:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-20-253 csi-node-driver-wc9cs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calidf0375cbfb5 [] [] }} ContainerID="4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e" Namespace="calico-system" Pod="csi-node-driver-wc9cs" WorkloadEndpoint="ip--172--31--20--253-k8s-csi--node--driver--wc9cs-" Jan 23 23:57:18.118413 
containerd[2029]: 2026-01-23 23:57:17.754 [INFO][5262] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e" Namespace="calico-system" Pod="csi-node-driver-wc9cs" WorkloadEndpoint="ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0" Jan 23 23:57:18.118413 containerd[2029]: 2026-01-23 23:57:17.892 [INFO][5283] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e" HandleID="k8s-pod-network.4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e" Workload="ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0" Jan 23 23:57:18.118413 containerd[2029]: 2026-01-23 23:57:17.893 [INFO][5283] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e" HandleID="k8s-pod-network.4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e" Workload="ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c950), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-20-253", "pod":"csi-node-driver-wc9cs", "timestamp":"2026-01-23 23:57:17.891685803 +0000 UTC"}, Hostname:"ip-172-31-20-253", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:57:18.118413 containerd[2029]: 2026-01-23 23:57:17.893 [INFO][5283] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:18.118413 containerd[2029]: 2026-01-23 23:57:17.893 [INFO][5283] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:57:18.118413 containerd[2029]: 2026-01-23 23:57:17.893 [INFO][5283] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-253' Jan 23 23:57:18.118413 containerd[2029]: 2026-01-23 23:57:17.916 [INFO][5283] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e" host="ip-172-31-20-253" Jan 23 23:57:18.118413 containerd[2029]: 2026-01-23 23:57:17.934 [INFO][5283] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-253" Jan 23 23:57:18.118413 containerd[2029]: 2026-01-23 23:57:17.954 [INFO][5283] ipam/ipam.go 511: Trying affinity for 192.168.18.128/26 host="ip-172-31-20-253" Jan 23 23:57:18.118413 containerd[2029]: 2026-01-23 23:57:17.965 [INFO][5283] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.128/26 host="ip-172-31-20-253" Jan 23 23:57:18.118413 containerd[2029]: 2026-01-23 23:57:17.973 [INFO][5283] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.128/26 host="ip-172-31-20-253" Jan 23 23:57:18.118413 containerd[2029]: 2026-01-23 23:57:17.973 [INFO][5283] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.18.128/26 handle="k8s-pod-network.4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e" host="ip-172-31-20-253" Jan 23 23:57:18.118413 containerd[2029]: 2026-01-23 23:57:17.984 [INFO][5283] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e Jan 23 23:57:18.118413 containerd[2029]: 2026-01-23 23:57:17.995 [INFO][5283] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.18.128/26 handle="k8s-pod-network.4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e" host="ip-172-31-20-253" Jan 23 23:57:18.118413 containerd[2029]: 2026-01-23 23:57:18.011 [INFO][5283] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.18.133/26] block=192.168.18.128/26 handle="k8s-pod-network.4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e" host="ip-172-31-20-253" Jan 23 23:57:18.118413 containerd[2029]: 2026-01-23 23:57:18.012 [INFO][5283] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.133/26] handle="k8s-pod-network.4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e" host="ip-172-31-20-253" Jan 23 23:57:18.118413 containerd[2029]: 2026-01-23 23:57:18.012 [INFO][5283] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:57:18.118413 containerd[2029]: 2026-01-23 23:57:18.012 [INFO][5283] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.18.133/26] IPv6=[] ContainerID="4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e" HandleID="k8s-pod-network.4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e" Workload="ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0" Jan 23 23:57:18.120797 containerd[2029]: 2026-01-23 23:57:18.017 [INFO][5262] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e" Namespace="calico-system" Pod="csi-node-driver-wc9cs" WorkloadEndpoint="ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"116c2572-ef7b-49fd-a16b-25d6e19f65b8", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"", Pod:"csi-node-driver-wc9cs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidf0375cbfb5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:18.120797 containerd[2029]: 2026-01-23 23:57:18.017 [INFO][5262] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.133/32] ContainerID="4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e" Namespace="calico-system" Pod="csi-node-driver-wc9cs" WorkloadEndpoint="ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0" Jan 23 23:57:18.120797 containerd[2029]: 2026-01-23 23:57:18.019 [INFO][5262] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidf0375cbfb5 ContainerID="4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e" Namespace="calico-system" Pod="csi-node-driver-wc9cs" WorkloadEndpoint="ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0" Jan 23 23:57:18.120797 containerd[2029]: 2026-01-23 23:57:18.037 [INFO][5262] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e" Namespace="calico-system" Pod="csi-node-driver-wc9cs" WorkloadEndpoint="ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0" Jan 23 23:57:18.120797 containerd[2029]: 2026-01-23 23:57:18.042 [INFO][5262] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e" 
Namespace="calico-system" Pod="csi-node-driver-wc9cs" WorkloadEndpoint="ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"116c2572-ef7b-49fd-a16b-25d6e19f65b8", ResourceVersion:"1055", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e", Pod:"csi-node-driver-wc9cs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidf0375cbfb5", MAC:"42:f9:d2:10:e7:d9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:18.120797 containerd[2029]: 2026-01-23 23:57:18.081 [INFO][5262] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e" Namespace="calico-system" Pod="csi-node-driver-wc9cs" WorkloadEndpoint="ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0" Jan 23 23:57:18.225134 containerd[2029]: time="2026-01-23T23:57:18.223880737Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:18.225134 containerd[2029]: time="2026-01-23T23:57:18.223973401Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:18.225134 containerd[2029]: time="2026-01-23T23:57:18.223999609Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:18.230111 containerd[2029]: time="2026-01-23T23:57:18.229570897Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:18.357056 systemd[1]: Started cri-containerd-4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e.scope - libcontainer container 4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e. 
Jan 23 23:57:18.402810 systemd-networkd[1942]: cali8093fbf5aaf: Gained IPv6LL
Jan 23 23:57:18.566927 containerd[2029]: time="2026-01-23T23:57:18.566772194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wc9cs,Uid:116c2572-ef7b-49fd-a16b-25d6e19f65b8,Namespace:calico-system,Attempt:1,} returns sandbox id \"4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e\""
Jan 23 23:57:18.584478 containerd[2029]: time="2026-01-23T23:57:18.581253614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 23 23:57:18.645550 kubelet[3407]: E0123 23:57:18.644821 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f867bfb44-djs5n" podUID="471e63ba-4009-4390-becb-d3cf35fc95c6"
Jan 23 23:57:18.645550 kubelet[3407]: E0123 23:57:18.645335 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mmfhn" podUID="046ae13d-0e4a-437d-9371-4ba65edfa713"
Jan 23 23:57:18.777559 containerd[2029]: 2026-01-23 23:57:18.482 [INFO][5328] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7"
Jan 23 23:57:18.777559 containerd[2029]: 2026-01-23 23:57:18.484 [INFO][5328] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" iface="eth0" netns="/var/run/netns/cni-bf792ef3-9b57-7357-58b9-ea94a3527763"
Jan 23 23:57:18.777559 containerd[2029]: 2026-01-23 23:57:18.487 [INFO][5328] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" iface="eth0" netns="/var/run/netns/cni-bf792ef3-9b57-7357-58b9-ea94a3527763"
Jan 23 23:57:18.777559 containerd[2029]: 2026-01-23 23:57:18.489 [INFO][5328] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" iface="eth0" netns="/var/run/netns/cni-bf792ef3-9b57-7357-58b9-ea94a3527763"
Jan 23 23:57:18.777559 containerd[2029]: 2026-01-23 23:57:18.489 [INFO][5328] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7"
Jan 23 23:57:18.777559 containerd[2029]: 2026-01-23 23:57:18.489 [INFO][5328] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7"
Jan 23 23:57:18.777559 containerd[2029]: 2026-01-23 23:57:18.675 [INFO][5394] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" HandleID="k8s-pod-network.fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0"
Jan 23 23:57:18.777559 containerd[2029]: 2026-01-23 23:57:18.677 [INFO][5394] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:57:18.777559 containerd[2029]: 2026-01-23 23:57:18.677 [INFO][5394] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:57:18.777559 containerd[2029]: 2026-01-23 23:57:18.719 [WARNING][5394] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" HandleID="k8s-pod-network.fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0"
Jan 23 23:57:18.777559 containerd[2029]: 2026-01-23 23:57:18.720 [INFO][5394] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" HandleID="k8s-pod-network.fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0"
Jan 23 23:57:18.777559 containerd[2029]: 2026-01-23 23:57:18.731 [INFO][5394] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:57:18.777559 containerd[2029]: 2026-01-23 23:57:18.758 [INFO][5328] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7"
Jan 23 23:57:18.781529 containerd[2029]: time="2026-01-23T23:57:18.779121363Z" level=info msg="TearDown network for sandbox \"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\" successfully"
Jan 23 23:57:18.781529 containerd[2029]: time="2026-01-23T23:57:18.779173539Z" level=info msg="StopPodSandbox for \"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\" returns successfully"
Jan 23 23:57:18.787236 systemd[1]: run-netns-cni\x2dbf792ef3\x2d9b57\x2d7357\x2d58b9\x2dea94a3527763.mount: Deactivated successfully.
Jan 23 23:57:18.792378 containerd[2029]: time="2026-01-23T23:57:18.791909715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xljwn,Uid:d79f639b-89ce-4a3e-898f-c563a6cc1a21,Namespace:kube-system,Attempt:1,}"
Jan 23 23:57:18.799102 containerd[2029]: 2026-01-23 23:57:18.407 [WARNING][5331] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" WorkloadEndpoint="ip--172--31--20--253-k8s-whisker--757987fb54--4nxp8-eth0"
Jan 23 23:57:18.799102 containerd[2029]: 2026-01-23 23:57:18.416 [INFO][5331] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d"
Jan 23 23:57:18.799102 containerd[2029]: 2026-01-23 23:57:18.420 [INFO][5331] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" iface="eth0" netns=""
Jan 23 23:57:18.799102 containerd[2029]: 2026-01-23 23:57:18.424 [INFO][5331] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d"
Jan 23 23:57:18.799102 containerd[2029]: 2026-01-23 23:57:18.424 [INFO][5331] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d"
Jan 23 23:57:18.799102 containerd[2029]: 2026-01-23 23:57:18.697 [INFO][5386] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" HandleID="k8s-pod-network.234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" Workload="ip--172--31--20--253-k8s-whisker--757987fb54--4nxp8-eth0"
Jan 23 23:57:18.799102 containerd[2029]: 2026-01-23 23:57:18.701 [INFO][5386] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:57:18.799102 containerd[2029]: 2026-01-23 23:57:18.731 [INFO][5386] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:57:18.799102 containerd[2029]: 2026-01-23 23:57:18.770 [WARNING][5386] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" HandleID="k8s-pod-network.234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" Workload="ip--172--31--20--253-k8s-whisker--757987fb54--4nxp8-eth0"
Jan 23 23:57:18.799102 containerd[2029]: 2026-01-23 23:57:18.770 [INFO][5386] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" HandleID="k8s-pod-network.234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" Workload="ip--172--31--20--253-k8s-whisker--757987fb54--4nxp8-eth0"
Jan 23 23:57:18.799102 containerd[2029]: 2026-01-23 23:57:18.775 [INFO][5386] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:57:18.799102 containerd[2029]: 2026-01-23 23:57:18.786 [INFO][5331] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d"
Jan 23 23:57:18.801731 containerd[2029]: time="2026-01-23T23:57:18.801673479Z" level=info msg="TearDown network for sandbox \"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\" successfully"
Jan 23 23:57:18.801955 containerd[2029]: time="2026-01-23T23:57:18.801879231Z" level=info msg="StopPodSandbox for \"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\" returns successfully"
Jan 23 23:57:18.804881 containerd[2029]: time="2026-01-23T23:57:18.804722704Z" level=info msg="RemovePodSandbox for \"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\""
Jan 23 23:57:18.805822 containerd[2029]: time="2026-01-23T23:57:18.804848608Z" level=info msg="Forcibly stopping sandbox \"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\""
Jan 23 23:57:18.863023 containerd[2029]: 2026-01-23 23:57:18.471 [INFO][5329] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044"
Jan 23 23:57:18.863023 containerd[2029]: 2026-01-23 23:57:18.472 [INFO][5329] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" iface="eth0" netns="/var/run/netns/cni-e399b4ff-e475-989b-6ba2-a6579026b367"
Jan 23 23:57:18.863023 containerd[2029]: 2026-01-23 23:57:18.476 [INFO][5329] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" iface="eth0" netns="/var/run/netns/cni-e399b4ff-e475-989b-6ba2-a6579026b367"
Jan 23 23:57:18.863023 containerd[2029]: 2026-01-23 23:57:18.478 [INFO][5329] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" iface="eth0" netns="/var/run/netns/cni-e399b4ff-e475-989b-6ba2-a6579026b367"
Jan 23 23:57:18.863023 containerd[2029]: 2026-01-23 23:57:18.479 [INFO][5329] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044"
Jan 23 23:57:18.863023 containerd[2029]: 2026-01-23 23:57:18.479 [INFO][5329] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044"
Jan 23 23:57:18.863023 containerd[2029]: 2026-01-23 23:57:18.733 [INFO][5392] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" HandleID="k8s-pod-network.256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0"
Jan 23 23:57:18.863023 containerd[2029]: 2026-01-23 23:57:18.733 [INFO][5392] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:57:18.863023 containerd[2029]: 2026-01-23 23:57:18.776 [INFO][5392] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:57:18.863023 containerd[2029]: 2026-01-23 23:57:18.813 [WARNING][5392] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" HandleID="k8s-pod-network.256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0"
Jan 23 23:57:18.863023 containerd[2029]: 2026-01-23 23:57:18.813 [INFO][5392] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" HandleID="k8s-pod-network.256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0"
Jan 23 23:57:18.863023 containerd[2029]: 2026-01-23 23:57:18.821 [INFO][5392] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:57:18.863023 containerd[2029]: 2026-01-23 23:57:18.836 [INFO][5329] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044"
Jan 23 23:57:18.870485 containerd[2029]: time="2026-01-23T23:57:18.868369024Z" level=info msg="TearDown network for sandbox \"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\" successfully"
Jan 23 23:57:18.870485 containerd[2029]: time="2026-01-23T23:57:18.868419712Z" level=info msg="StopPodSandbox for \"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\" returns successfully"
Jan 23 23:57:18.872180 systemd[1]: run-netns-cni\x2de399b4ff\x2de475\x2d989b\x2d6ba2\x2da6579026b367.mount: Deactivated successfully.
Jan 23 23:57:18.874776 containerd[2029]: time="2026-01-23T23:57:18.873676036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fdd48bcf6-xgcqc,Uid:532cc4d2-2f64-4521-88b0-26ef20fbd1cc,Namespace:calico-apiserver,Attempt:1,}"
Jan 23 23:57:18.915396 containerd[2029]: time="2026-01-23T23:57:18.915236824Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 23 23:57:18.919495 containerd[2029]: time="2026-01-23T23:57:18.918965476Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 23 23:57:18.922126 containerd[2029]: time="2026-01-23T23:57:18.921915148Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 23 23:57:18.923062 kubelet[3407]: E0123 23:57:18.922725 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 23:57:18.923062 kubelet[3407]: E0123 23:57:18.922796 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 23:57:18.933947 kubelet[3407]: E0123 23:57:18.933302 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29xwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wc9cs_calico-system(116c2572-ef7b-49fd-a16b-25d6e19f65b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:57:18.939567 containerd[2029]: time="2026-01-23T23:57:18.939345196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 23:57:19.268176 containerd[2029]: time="2026-01-23T23:57:19.266632058Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 23 23:57:19.273157 containerd[2029]: time="2026-01-23T23:57:19.272520398Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 23:57:19.273157 containerd[2029]: time="2026-01-23T23:57:19.272944226Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 23 23:57:19.274930 kubelet[3407]: E0123 23:57:19.274023 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 23:57:19.275244 kubelet[3407]: E0123 23:57:19.274974 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 23:57:19.277367 kubelet[3407]: E0123 23:57:19.275410 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29xwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wc9cs_calico-system(116c2572-ef7b-49fd-a16b-25d6e19f65b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:57:19.277367 kubelet[3407]: E0123 23:57:19.277202 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wc9cs" podUID="116c2572-ef7b-49fd-a16b-25d6e19f65b8"
Jan 23 23:57:19.297989 systemd-networkd[1942]: calidf0375cbfb5: Gained IPv6LL
Jan 23 23:57:19.368031 systemd-networkd[1942]: cali9ded01ceede: Link UP
Jan 23 23:57:19.371781 systemd-networkd[1942]: cali9ded01ceede: Gained carrier
Jan 23 23:57:19.423113 containerd[2029]: 2026-01-23 23:57:19.119 [INFO][5429] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0 coredns-668d6bf9bc- kube-system d79f639b-89ce-4a3e-898f-c563a6cc1a21 1081 0 2026-01-23 23:56:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-253 coredns-668d6bf9bc-xljwn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9ded01ceede [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49" Namespace="kube-system" Pod="coredns-668d6bf9bc-xljwn" WorkloadEndpoint="ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-"
Jan 23 23:57:19.423113 containerd[2029]: 2026-01-23 23:57:19.120 [INFO][5429] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49" Namespace="kube-system" Pod="coredns-668d6bf9bc-xljwn" WorkloadEndpoint="ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0"
Jan 23 23:57:19.423113 containerd[2029]: 2026-01-23 23:57:19.238 [INFO][5467] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49" HandleID="k8s-pod-network.cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0"
Jan 23 23:57:19.423113 containerd[2029]: 2026-01-23 23:57:19.239 [INFO][5467] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49" HandleID="k8s-pod-network.cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000120320), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-253", "pod":"coredns-668d6bf9bc-xljwn", "timestamp":"2026-01-23 23:57:19.238772186 +0000 UTC"}, Hostname:"ip-172-31-20-253", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 23 23:57:19.423113 containerd[2029]: 2026-01-23 23:57:19.239 [INFO][5467] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:57:19.423113 containerd[2029]: 2026-01-23 23:57:19.239 [INFO][5467] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:57:19.423113 containerd[2029]: 2026-01-23 23:57:19.239 [INFO][5467] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-253'
Jan 23 23:57:19.423113 containerd[2029]: 2026-01-23 23:57:19.272 [INFO][5467] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49" host="ip-172-31-20-253"
Jan 23 23:57:19.423113 containerd[2029]: 2026-01-23 23:57:19.290 [INFO][5467] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-253"
Jan 23 23:57:19.423113 containerd[2029]: 2026-01-23 23:57:19.302 [INFO][5467] ipam/ipam.go 511: Trying affinity for 192.168.18.128/26 host="ip-172-31-20-253"
Jan 23 23:57:19.423113 containerd[2029]: 2026-01-23 23:57:19.307 [INFO][5467] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.128/26 host="ip-172-31-20-253"
Jan 23 23:57:19.423113 containerd[2029]: 2026-01-23 23:57:19.312 [INFO][5467] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.128/26 host="ip-172-31-20-253"
Jan 23 23:57:19.423113 containerd[2029]: 2026-01-23 23:57:19.312 [INFO][5467] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.18.128/26 handle="k8s-pod-network.cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49" host="ip-172-31-20-253"
Jan 23 23:57:19.423113 containerd[2029]: 2026-01-23 23:57:19.315 [INFO][5467] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49
Jan 23 23:57:19.423113 containerd[2029]: 2026-01-23 23:57:19.341 [INFO][5467] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.18.128/26 handle="k8s-pod-network.cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49" host="ip-172-31-20-253"
Jan 23 23:57:19.423113 containerd[2029]: 2026-01-23 23:57:19.355 [INFO][5467] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.18.134/26] block=192.168.18.128/26 handle="k8s-pod-network.cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49" host="ip-172-31-20-253"
Jan 23 23:57:19.423113 containerd[2029]: 2026-01-23 23:57:19.355 [INFO][5467] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.134/26] handle="k8s-pod-network.cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49" host="ip-172-31-20-253"
Jan 23 23:57:19.423113 containerd[2029]: 2026-01-23 23:57:19.355 [INFO][5467] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:57:19.423113 containerd[2029]: 2026-01-23 23:57:19.355 [INFO][5467] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.18.134/26] IPv6=[] ContainerID="cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49" HandleID="k8s-pod-network.cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0"
Jan 23 23:57:19.428010 containerd[2029]: 2026-01-23 23:57:19.361 [INFO][5429] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49" Namespace="kube-system" Pod="coredns-668d6bf9bc-xljwn" WorkloadEndpoint="ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d79f639b-89ce-4a3e-898f-c563a6cc1a21", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"", Pod:"coredns-668d6bf9bc-xljwn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ded01ceede", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:57:19.428010 containerd[2029]: 2026-01-23 23:57:19.361 [INFO][5429] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.134/32] ContainerID="cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49" Namespace="kube-system" Pod="coredns-668d6bf9bc-xljwn" WorkloadEndpoint="ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0"
Jan 23 23:57:19.428010 containerd[2029]: 2026-01-23 23:57:19.361 [INFO][5429] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ded01ceede ContainerID="cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49" Namespace="kube-system" Pod="coredns-668d6bf9bc-xljwn" WorkloadEndpoint="ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0"
Jan 23 23:57:19.428010 containerd[2029]: 2026-01-23 23:57:19.376 [INFO][5429] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49" Namespace="kube-system" Pod="coredns-668d6bf9bc-xljwn" WorkloadEndpoint="ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0"
Jan 23 23:57:19.428010 containerd[2029]: 2026-01-23 23:57:19.379 [INFO][5429] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49" Namespace="kube-system" Pod="coredns-668d6bf9bc-xljwn" WorkloadEndpoint="ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d79f639b-89ce-4a3e-898f-c563a6cc1a21", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49", Pod:"coredns-668d6bf9bc-xljwn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ded01ceede", MAC:"ca:fb:b4:41:6b:e9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:57:19.428010 containerd[2029]: 2026-01-23 23:57:19.416 [INFO][5429] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49" Namespace="kube-system" Pod="coredns-668d6bf9bc-xljwn" WorkloadEndpoint="ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0"
Jan 23 23:57:19.495939 containerd[2029]: time="2026-01-23T23:57:19.495508731Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:57:19.495939 containerd[2029]: time="2026-01-23T23:57:19.495778467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:57:19.496565 containerd[2029]: time="2026-01-23T23:57:19.495901179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:57:19.497151 containerd[2029]: time="2026-01-23T23:57:19.496936479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:57:19.585529 systemd-networkd[1942]: cali8cf4b600d7a: Link UP
Jan 23 23:57:19.591064 systemd-networkd[1942]: cali8cf4b600d7a: Gained carrier
Jan 23 23:57:19.623298 containerd[2029]: 2026-01-23 23:57:19.114 [WARNING][5424] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" WorkloadEndpoint="ip--172--31--20--253-k8s-whisker--757987fb54--4nxp8-eth0"
Jan 23 23:57:19.623298 containerd[2029]: 2026-01-23 23:57:19.117 [INFO][5424] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d"
Jan 23 23:57:19.623298 containerd[2029]: 2026-01-23 23:57:19.118 [INFO][5424] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" iface="eth0" netns=""
Jan 23 23:57:19.623298 containerd[2029]: 2026-01-23 23:57:19.118 [INFO][5424] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d"
Jan 23 23:57:19.623298 containerd[2029]: 2026-01-23 23:57:19.119 [INFO][5424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d"
Jan 23 23:57:19.623298 containerd[2029]: 2026-01-23 23:57:19.294 [INFO][5457] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" HandleID="k8s-pod-network.234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" Workload="ip--172--31--20--253-k8s-whisker--757987fb54--4nxp8-eth0"
Jan 23 23:57:19.623298 containerd[2029]: 2026-01-23 23:57:19.297 [INFO][5457] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:57:19.623298 containerd[2029]: 2026-01-23 23:57:19.534 [INFO][5457] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:57:19.623298 containerd[2029]: 2026-01-23 23:57:19.558 [WARNING][5457] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" HandleID="k8s-pod-network.234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" Workload="ip--172--31--20--253-k8s-whisker--757987fb54--4nxp8-eth0"
Jan 23 23:57:19.623298 containerd[2029]: 2026-01-23 23:57:19.559 [INFO][5457] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" HandleID="k8s-pod-network.234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d" Workload="ip--172--31--20--253-k8s-whisker--757987fb54--4nxp8-eth0"
Jan 23 23:57:19.623298 containerd[2029]: 2026-01-23 23:57:19.574 [INFO][5457] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:57:19.623298 containerd[2029]: 2026-01-23 23:57:19.604 [INFO][5424] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d"
Jan 23 23:57:19.632969 containerd[2029]: time="2026-01-23T23:57:19.630809368Z" level=info msg="TearDown network for sandbox \"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\" successfully"
Jan 23 23:57:19.661646 kubelet[3407]: E0123 23:57:19.660655 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mmfhn" podUID="046ae13d-0e4a-437d-9371-4ba65edfa713"
Jan 23 23:57:19.667816 kubelet[3407]: E0123 23:57:19.667199 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wc9cs" podUID="116c2572-ef7b-49fd-a16b-25d6e19f65b8"
Jan 23 23:57:19.670064 systemd[1]: Started cri-containerd-cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49.scope - libcontainer container cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49.
Jan 23 23:57:19.672480 containerd[2029]: time="2026-01-23T23:57:19.671799652Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 23 23:57:19.672480 containerd[2029]: time="2026-01-23T23:57:19.671908000Z" level=info msg="RemovePodSandbox \"234ccb0ecd1ad73d2f708782d01cc457f4de70f1ddc4d6397700c85806ffdc7d\" returns successfully"
Jan 23 23:57:19.675493 containerd[2029]: time="2026-01-23T23:57:19.675097132Z" level=info msg="StopPodSandbox for \"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\""
Jan 23 23:57:19.682617 containerd[2029]: 2026-01-23 23:57:19.129 [INFO][5438] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0 calico-apiserver-7fdd48bcf6- calico-apiserver 532cc4d2-2f64-4521-88b0-26ef20fbd1cc 1080 0 2026-01-23 23:56:37 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7fdd48bcf6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-20-253 calico-apiserver-7fdd48bcf6-xgcqc eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8cf4b600d7a [] [] }} ContainerID="40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa" Namespace="calico-apiserver" Pod="calico-apiserver-7fdd48bcf6-xgcqc" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-"
Jan 23 23:57:19.682617 containerd[2029]: 2026-01-23 23:57:19.130 [INFO][5438] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa" Namespace="calico-apiserver" Pod="calico-apiserver-7fdd48bcf6-xgcqc" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0"
Jan 23 23:57:19.682617 containerd[2029]: 2026-01-23 23:57:19.292 [INFO][5462] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa" HandleID="k8s-pod-network.40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0"
Jan 23 23:57:19.682617 containerd[2029]: 2026-01-23 23:57:19.292 [INFO][5462] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa" HandleID="k8s-pod-network.40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000333400), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-20-253", "pod":"calico-apiserver-7fdd48bcf6-xgcqc", "timestamp":"2026-01-23 23:57:19.292108454 +0000 UTC"}, Hostname:"ip-172-31-20-253", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 23 23:57:19.682617 containerd[2029]: 2026-01-23 23:57:19.293 [INFO][5462] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:57:19.682617 containerd[2029]: 2026-01-23 23:57:19.356 [INFO][5462] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:57:19.682617 containerd[2029]: 2026-01-23 23:57:19.356 [INFO][5462] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-253'
Jan 23 23:57:19.682617 containerd[2029]: 2026-01-23 23:57:19.396 [INFO][5462] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa" host="ip-172-31-20-253"
Jan 23 23:57:19.682617 containerd[2029]: 2026-01-23 23:57:19.422 [INFO][5462] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-253"
Jan 23 23:57:19.682617 containerd[2029]: 2026-01-23 23:57:19.450 [INFO][5462] ipam/ipam.go 511: Trying affinity for 192.168.18.128/26 host="ip-172-31-20-253"
Jan 23 23:57:19.682617 containerd[2029]: 2026-01-23 23:57:19.459 [INFO][5462] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.128/26 host="ip-172-31-20-253"
Jan 23 23:57:19.682617 containerd[2029]: 2026-01-23 23:57:19.472 [INFO][5462] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.128/26 host="ip-172-31-20-253"
Jan 23 23:57:19.682617 containerd[2029]: 2026-01-23 23:57:19.472 [INFO][5462] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.18.128/26 handle="k8s-pod-network.40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa" host="ip-172-31-20-253"
Jan 23 23:57:19.682617 containerd[2029]: 2026-01-23 23:57:19.480 [INFO][5462] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa
Jan 23 23:57:19.682617 containerd[2029]: 2026-01-23 23:57:19.493 [INFO][5462] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.18.128/26 handle="k8s-pod-network.40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa" host="ip-172-31-20-253"
Jan 23 23:57:19.682617 containerd[2029]: 2026-01-23 23:57:19.531 [INFO][5462] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.18.135/26] block=192.168.18.128/26 handle="k8s-pod-network.40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa" host="ip-172-31-20-253"
Jan 23 23:57:19.682617 containerd[2029]: 2026-01-23 23:57:19.531 [INFO][5462] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.135/26] handle="k8s-pod-network.40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa" host="ip-172-31-20-253"
Jan 23 23:57:19.682617 containerd[2029]: 2026-01-23 23:57:19.532 [INFO][5462] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:57:19.682617 containerd[2029]: 2026-01-23 23:57:19.532 [INFO][5462] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.18.135/26] IPv6=[] ContainerID="40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa" HandleID="k8s-pod-network.40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0"
Jan 23 23:57:19.685646 containerd[2029]: 2026-01-23 23:57:19.552 [INFO][5438] cni-plugin/k8s.go 418: Populated endpoint ContainerID="40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa" Namespace="calico-apiserver" Pod="calico-apiserver-7fdd48bcf6-xgcqc" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0", GenerateName:"calico-apiserver-7fdd48bcf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"532cc4d2-2f64-4521-88b0-26ef20fbd1cc", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fdd48bcf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"", Pod:"calico-apiserver-7fdd48bcf6-xgcqc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8cf4b600d7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:57:19.685646 containerd[2029]: 2026-01-23 23:57:19.553 [INFO][5438] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.135/32] ContainerID="40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa" Namespace="calico-apiserver" Pod="calico-apiserver-7fdd48bcf6-xgcqc" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0"
Jan 23 23:57:19.685646 containerd[2029]: 2026-01-23 23:57:19.553 [INFO][5438] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8cf4b600d7a ContainerID="40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa" Namespace="calico-apiserver" Pod="calico-apiserver-7fdd48bcf6-xgcqc" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0"
Jan 23 23:57:19.685646 containerd[2029]: 2026-01-23 23:57:19.589 [INFO][5438] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa" Namespace="calico-apiserver" Pod="calico-apiserver-7fdd48bcf6-xgcqc" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0"
Jan 23 23:57:19.685646 containerd[2029]: 2026-01-23 23:57:19.601 [INFO][5438] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa" Namespace="calico-apiserver" Pod="calico-apiserver-7fdd48bcf6-xgcqc" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0", GenerateName:"calico-apiserver-7fdd48bcf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"532cc4d2-2f64-4521-88b0-26ef20fbd1cc", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fdd48bcf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa", Pod:"calico-apiserver-7fdd48bcf6-xgcqc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8cf4b600d7a", MAC:"ce:29:ee:eb:76:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:57:19.685646 containerd[2029]: 2026-01-23 23:57:19.656 [INFO][5438] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa" Namespace="calico-apiserver" Pod="calico-apiserver-7fdd48bcf6-xgcqc" WorkloadEndpoint="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0"
Jan 23 23:57:19.794556 containerd[2029]: time="2026-01-23T23:57:19.793749556Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:57:19.794556 containerd[2029]: time="2026-01-23T23:57:19.793878940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:57:19.794556 containerd[2029]: time="2026-01-23T23:57:19.793915588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:57:19.794556 containerd[2029]: time="2026-01-23T23:57:19.794106736Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:57:19.858779 systemd[1]: Started cri-containerd-40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa.scope - libcontainer container 40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa.
Jan 23 23:57:19.912924 containerd[2029]: time="2026-01-23T23:57:19.912683081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xljwn,Uid:d79f639b-89ce-4a3e-898f-c563a6cc1a21,Namespace:kube-system,Attempt:1,} returns sandbox id \"cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49\""
Jan 23 23:57:19.966051 containerd[2029]: time="2026-01-23T23:57:19.965407889Z" level=info msg="CreateContainer within sandbox \"cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 23 23:57:20.009507 containerd[2029]: time="2026-01-23T23:57:20.008111833Z" level=info msg="CreateContainer within sandbox \"cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0f395aab1875f9e101bdd361ebc152312194072992f3148c3c30e42d0a9cea5c\""
Jan 23 23:57:20.022752 containerd[2029]: time="2026-01-23T23:57:20.022390886Z" level=info msg="StartContainer for \"0f395aab1875f9e101bdd361ebc152312194072992f3148c3c30e42d0a9cea5c\""
Jan 23 23:57:20.043631 containerd[2029]: time="2026-01-23T23:57:20.042610382Z" level=info msg="StopPodSandbox for \"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\""
Jan 23 23:57:20.180879 systemd[1]: Started cri-containerd-0f395aab1875f9e101bdd361ebc152312194072992f3148c3c30e42d0a9cea5c.scope - libcontainer container 0f395aab1875f9e101bdd361ebc152312194072992f3148c3c30e42d0a9cea5c.
Jan 23 23:57:20.194743 containerd[2029]: 2026-01-23 23:57:19.947 [WARNING][5537] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0", GenerateName:"calico-apiserver-7fdd48bcf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"17163695-eef5-4bf6-be5b-0d305316c85b", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 37, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fdd48bcf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296", Pod:"calico-apiserver-7fdd48bcf6-n88dg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic97b1363e84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:57:20.194743 containerd[2029]: 2026-01-23 23:57:19.951 [INFO][5537] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138"
Jan 23 23:57:20.194743 containerd[2029]: 2026-01-23 23:57:19.951 [INFO][5537] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" iface="eth0" netns=""
Jan 23 23:57:20.194743 containerd[2029]: 2026-01-23 23:57:19.951 [INFO][5537] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138"
Jan 23 23:57:20.194743 containerd[2029]: 2026-01-23 23:57:19.951 [INFO][5537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138"
Jan 23 23:57:20.194743 containerd[2029]: 2026-01-23 23:57:20.106 [INFO][5586] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" HandleID="k8s-pod-network.fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0"
Jan 23 23:57:20.194743 containerd[2029]: 2026-01-23 23:57:20.106 [INFO][5586] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:57:20.194743 containerd[2029]: 2026-01-23 23:57:20.106 [INFO][5586] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:57:20.194743 containerd[2029]: 2026-01-23 23:57:20.144 [WARNING][5586] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" HandleID="k8s-pod-network.fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0"
Jan 23 23:57:20.194743 containerd[2029]: 2026-01-23 23:57:20.144 [INFO][5586] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" HandleID="k8s-pod-network.fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0"
Jan 23 23:57:20.194743 containerd[2029]: 2026-01-23 23:57:20.162 [INFO][5586] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:57:20.194743 containerd[2029]: 2026-01-23 23:57:20.173 [INFO][5537] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138"
Jan 23 23:57:20.194743 containerd[2029]: time="2026-01-23T23:57:20.194154158Z" level=info msg="TearDown network for sandbox \"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\" successfully"
Jan 23 23:57:20.194743 containerd[2029]: time="2026-01-23T23:57:20.194192246Z" level=info msg="StopPodSandbox for \"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\" returns successfully"
Jan 23 23:57:20.196983 containerd[2029]: time="2026-01-23T23:57:20.195080246Z" level=info msg="RemovePodSandbox for \"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\""
Jan 23 23:57:20.196983 containerd[2029]: time="2026-01-23T23:57:20.195131474Z" level=info msg="Forcibly stopping sandbox \"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\""
Jan 23 23:57:20.274768 containerd[2029]: time="2026-01-23T23:57:20.274318395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7fdd48bcf6-xgcqc,Uid:532cc4d2-2f64-4521-88b0-26ef20fbd1cc,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa\""
Jan 23 23:57:20.298004 containerd[2029]: time="2026-01-23T23:57:20.297750807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 23:57:20.339255 containerd[2029]: time="2026-01-23T23:57:20.335708619Z" level=info msg="StartContainer for \"0f395aab1875f9e101bdd361ebc152312194072992f3148c3c30e42d0a9cea5c\" returns successfully"
Jan 23 23:57:20.523125 containerd[2029]: 2026-01-23 23:57:20.394 [INFO][5605] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5"
Jan 23 23:57:20.523125 containerd[2029]: 2026-01-23 23:57:20.394 [INFO][5605] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" iface="eth0" netns="/var/run/netns/cni-21d7110d-3613-75b7-705d-ffd53a069e8d"
Jan 23 23:57:20.523125 containerd[2029]: 2026-01-23 23:57:20.394 [INFO][5605] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" iface="eth0" netns="/var/run/netns/cni-21d7110d-3613-75b7-705d-ffd53a069e8d"
Jan 23 23:57:20.523125 containerd[2029]: 2026-01-23 23:57:20.395 [INFO][5605] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" iface="eth0" netns="/var/run/netns/cni-21d7110d-3613-75b7-705d-ffd53a069e8d" Jan 23 23:57:20.523125 containerd[2029]: 2026-01-23 23:57:20.395 [INFO][5605] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" Jan 23 23:57:20.523125 containerd[2029]: 2026-01-23 23:57:20.395 [INFO][5605] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" Jan 23 23:57:20.523125 containerd[2029]: 2026-01-23 23:57:20.479 [INFO][5662] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" HandleID="k8s-pod-network.f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0" Jan 23 23:57:20.523125 containerd[2029]: 2026-01-23 23:57:20.480 [INFO][5662] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:20.523125 containerd[2029]: 2026-01-23 23:57:20.480 [INFO][5662] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:20.523125 containerd[2029]: 2026-01-23 23:57:20.503 [WARNING][5662] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" HandleID="k8s-pod-network.f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0" Jan 23 23:57:20.523125 containerd[2029]: 2026-01-23 23:57:20.503 [INFO][5662] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" HandleID="k8s-pod-network.f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0" Jan 23 23:57:20.523125 containerd[2029]: 2026-01-23 23:57:20.508 [INFO][5662] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:20.523125 containerd[2029]: 2026-01-23 23:57:20.516 [INFO][5605] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" Jan 23 23:57:20.526617 containerd[2029]: time="2026-01-23T23:57:20.526557568Z" level=info msg="TearDown network for sandbox \"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\" successfully" Jan 23 23:57:20.528478 containerd[2029]: time="2026-01-23T23:57:20.527367868Z" level=info msg="StopPodSandbox for \"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\" returns successfully" Jan 23 23:57:20.533424 systemd[1]: run-netns-cni\x2d21d7110d\x2d3613\x2d75b7\x2d705d\x2dffd53a069e8d.mount: Deactivated successfully. 
Jan 23 23:57:20.535101 containerd[2029]: time="2026-01-23T23:57:20.534793612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d4dkx,Uid:761e0c97-a113-4485-8707-6df97f1eaf68,Namespace:kube-system,Attempt:1,}" Jan 23 23:57:20.622318 containerd[2029]: time="2026-01-23T23:57:20.621635969Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:20.624893 containerd[2029]: time="2026-01-23T23:57:20.624520253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:57:20.628685 containerd[2029]: time="2026-01-23T23:57:20.624782861Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:57:20.630717 kubelet[3407]: E0123 23:57:20.629386 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:20.630717 kubelet[3407]: E0123 23:57:20.629476 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:20.635794 kubelet[3407]: E0123 23:57:20.634305 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q6pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fdd48bcf6-xgcqc_calico-apiserver(532cc4d2-2f64-4521-88b0-26ef20fbd1cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:20.640293 kubelet[3407]: E0123 23:57:20.638862 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-xgcqc" podUID="532cc4d2-2f64-4521-88b0-26ef20fbd1cc" Jan 23 23:57:20.642290 systemd-networkd[1942]: cali8cf4b600d7a: Gained IPv6LL Jan 23 23:57:20.707553 kubelet[3407]: E0123 23:57:20.705424 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-xgcqc" podUID="532cc4d2-2f64-4521-88b0-26ef20fbd1cc" Jan 23 23:57:20.712892 containerd[2029]: 2026-01-23 23:57:20.477 [WARNING][5647] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0", GenerateName:"calico-apiserver-7fdd48bcf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"17163695-eef5-4bf6-be5b-0d305316c85b", ResourceVersion:"1064", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fdd48bcf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"abadded3329d54bd72abbaf3ba68b8d88920610ee9516060ffd57a71e9c9a296", Pod:"calico-apiserver-7fdd48bcf6-n88dg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic97b1363e84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:20.712892 containerd[2029]: 2026-01-23 23:57:20.477 [INFO][5647] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" Jan 23 23:57:20.712892 containerd[2029]: 2026-01-23 23:57:20.477 [INFO][5647] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" iface="eth0" netns="" Jan 23 23:57:20.712892 containerd[2029]: 2026-01-23 23:57:20.477 [INFO][5647] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" Jan 23 23:57:20.712892 containerd[2029]: 2026-01-23 23:57:20.477 [INFO][5647] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" Jan 23 23:57:20.712892 containerd[2029]: 2026-01-23 23:57:20.624 [INFO][5672] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" HandleID="k8s-pod-network.fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0" Jan 23 23:57:20.712892 containerd[2029]: 2026-01-23 23:57:20.626 [INFO][5672] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:20.712892 containerd[2029]: 2026-01-23 23:57:20.626 [INFO][5672] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:20.712892 containerd[2029]: 2026-01-23 23:57:20.674 [WARNING][5672] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" HandleID="k8s-pod-network.fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0" Jan 23 23:57:20.712892 containerd[2029]: 2026-01-23 23:57:20.674 [INFO][5672] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" HandleID="k8s-pod-network.fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--n88dg-eth0" Jan 23 23:57:20.712892 containerd[2029]: 2026-01-23 23:57:20.683 [INFO][5672] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:20.712892 containerd[2029]: 2026-01-23 23:57:20.699 [INFO][5647] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138" Jan 23 23:57:20.714148 containerd[2029]: time="2026-01-23T23:57:20.712928981Z" level=info msg="TearDown network for sandbox \"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\" successfully" Jan 23 23:57:20.728467 containerd[2029]: time="2026-01-23T23:57:20.728234885Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:57:20.728467 containerd[2029]: time="2026-01-23T23:57:20.728334497Z" level=info msg="RemovePodSandbox \"fc1a00747c14885315e2d8bd87c13e806c81b7b5d28848f468fbf7da4f92b138\" returns successfully" Jan 23 23:57:20.731185 containerd[2029]: time="2026-01-23T23:57:20.730777601Z" level=info msg="StopPodSandbox for \"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\"" Jan 23 23:57:20.795175 kubelet[3407]: I0123 23:57:20.793852 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xljwn" podStartSLOduration=57.793708997 podStartE2EDuration="57.793708997s" podCreationTimestamp="2026-01-23 23:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:57:20.737873141 +0000 UTC m=+62.960512382" watchObservedRunningTime="2026-01-23 23:57:20.793708997 +0000 UTC m=+63.016348238" Jan 23 23:57:20.962725 systemd-networkd[1942]: cali9ded01ceede: Gained IPv6LL Jan 23 23:57:21.110112 systemd-networkd[1942]: calib3e29952470: Link UP Jan 23 23:57:21.112733 systemd-networkd[1942]: calib3e29952470: Gained carrier Jan 23 23:57:21.169738 containerd[2029]: 2026-01-23 23:57:20.922 [WARNING][5706] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"046ae13d-0e4a-437d-9371-4ba65edfa713", ResourceVersion:"1118", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce", Pod:"goldmane-666569f655-mmfhn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8093fbf5aaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:21.169738 containerd[2029]: 2026-01-23 23:57:20.922 [INFO][5706] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" Jan 23 23:57:21.169738 containerd[2029]: 2026-01-23 23:57:20.922 [INFO][5706] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" iface="eth0" netns="" Jan 23 23:57:21.169738 containerd[2029]: 2026-01-23 23:57:20.922 [INFO][5706] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" Jan 23 23:57:21.169738 containerd[2029]: 2026-01-23 23:57:20.922 [INFO][5706] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" Jan 23 23:57:21.169738 containerd[2029]: 2026-01-23 23:57:21.015 [INFO][5721] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" HandleID="k8s-pod-network.6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" Workload="ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0" Jan 23 23:57:21.169738 containerd[2029]: 2026-01-23 23:57:21.016 [INFO][5721] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:21.169738 containerd[2029]: 2026-01-23 23:57:21.092 [INFO][5721] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:21.169738 containerd[2029]: 2026-01-23 23:57:21.149 [WARNING][5721] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" HandleID="k8s-pod-network.6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" Workload="ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0" Jan 23 23:57:21.169738 containerd[2029]: 2026-01-23 23:57:21.150 [INFO][5721] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" HandleID="k8s-pod-network.6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" Workload="ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0" Jan 23 23:57:21.169738 containerd[2029]: 2026-01-23 23:57:21.158 [INFO][5721] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:21.169738 containerd[2029]: 2026-01-23 23:57:21.163 [INFO][5706] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" Jan 23 23:57:21.170657 containerd[2029]: time="2026-01-23T23:57:21.170595939Z" level=info msg="TearDown network for sandbox \"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\" successfully" Jan 23 23:57:21.170739 containerd[2029]: time="2026-01-23T23:57:21.170650407Z" level=info msg="StopPodSandbox for \"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\" returns successfully" Jan 23 23:57:21.176141 containerd[2029]: time="2026-01-23T23:57:21.175955103Z" level=info msg="RemovePodSandbox for \"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\"" Jan 23 23:57:21.176141 containerd[2029]: time="2026-01-23T23:57:21.176019327Z" level=info msg="Forcibly stopping sandbox \"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\"" Jan 23 23:57:21.209588 containerd[2029]: 2026-01-23 23:57:20.850 [INFO][5680] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0 coredns-668d6bf9bc- kube-system 761e0c97-a113-4485-8707-6df97f1eaf68 1126 0 2026-01-23 23:56:23 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-20-253 coredns-668d6bf9bc-d4dkx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib3e29952470 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4dkx" WorkloadEndpoint="ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-" Jan 23 23:57:21.209588 containerd[2029]: 2026-01-23 23:57:20.852 [INFO][5680] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4dkx" WorkloadEndpoint="ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0" Jan 23 23:57:21.209588 containerd[2029]: 2026-01-23 23:57:20.978 [INFO][5712] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25" HandleID="k8s-pod-network.20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0" Jan 23 23:57:21.209588 containerd[2029]: 2026-01-23 23:57:20.978 [INFO][5712] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25" HandleID="k8s-pod-network.20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d740), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-20-253", "pod":"coredns-668d6bf9bc-d4dkx", "timestamp":"2026-01-23 23:57:20.978412986 +0000 UTC"}, Hostname:"ip-172-31-20-253", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:57:21.209588 containerd[2029]: 2026-01-23 23:57:20.979 [INFO][5712] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:21.209588 containerd[2029]: 2026-01-23 23:57:20.979 [INFO][5712] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:21.209588 containerd[2029]: 2026-01-23 23:57:20.979 [INFO][5712] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-20-253' Jan 23 23:57:21.209588 containerd[2029]: 2026-01-23 23:57:21.002 [INFO][5712] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25" host="ip-172-31-20-253" Jan 23 23:57:21.209588 containerd[2029]: 2026-01-23 23:57:21.022 [INFO][5712] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-20-253" Jan 23 23:57:21.209588 containerd[2029]: 2026-01-23 23:57:21.034 [INFO][5712] ipam/ipam.go 511: Trying affinity for 192.168.18.128/26 host="ip-172-31-20-253" Jan 23 23:57:21.209588 containerd[2029]: 2026-01-23 23:57:21.040 [INFO][5712] ipam/ipam.go 158: Attempting to load block cidr=192.168.18.128/26 host="ip-172-31-20-253" Jan 23 23:57:21.209588 containerd[2029]: 2026-01-23 23:57:21.047 [INFO][5712] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.18.128/26 host="ip-172-31-20-253" Jan 23 23:57:21.209588 containerd[2029]: 2026-01-23 23:57:21.047 [INFO][5712] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.18.128/26 handle="k8s-pod-network.20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25" host="ip-172-31-20-253" Jan 23 23:57:21.209588 containerd[2029]: 2026-01-23 23:57:21.057 [INFO][5712] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25 Jan 23 23:57:21.209588 containerd[2029]: 2026-01-23 23:57:21.068 [INFO][5712] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.18.128/26 handle="k8s-pod-network.20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25" host="ip-172-31-20-253" Jan 23 23:57:21.209588 containerd[2029]: 2026-01-23 23:57:21.092 [INFO][5712] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.18.136/26] block=192.168.18.128/26 handle="k8s-pod-network.20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25" host="ip-172-31-20-253" Jan 23 23:57:21.209588 containerd[2029]: 2026-01-23 23:57:21.092 [INFO][5712] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.18.136/26] handle="k8s-pod-network.20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25" host="ip-172-31-20-253" Jan 23 23:57:21.209588 containerd[2029]: 2026-01-23 23:57:21.093 [INFO][5712] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:57:21.209588 containerd[2029]: 2026-01-23 23:57:21.093 [INFO][5712] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.18.136/26] IPv6=[] ContainerID="20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25" HandleID="k8s-pod-network.20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0" Jan 23 23:57:21.212351 containerd[2029]: 2026-01-23 23:57:21.101 [INFO][5680] cni-plugin/k8s.go 418: Populated endpoint ContainerID="20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4dkx" WorkloadEndpoint="ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"761e0c97-a113-4485-8707-6df97f1eaf68", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"", Pod:"coredns-668d6bf9bc-d4dkx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib3e29952470", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:21.212351 containerd[2029]: 2026-01-23 23:57:21.101 [INFO][5680] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.18.136/32] ContainerID="20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4dkx" WorkloadEndpoint="ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0" Jan 23 23:57:21.212351 containerd[2029]: 2026-01-23 23:57:21.101 [INFO][5680] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3e29952470 ContainerID="20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4dkx" WorkloadEndpoint="ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0" Jan 23 23:57:21.212351 containerd[2029]: 2026-01-23 23:57:21.128 [INFO][5680] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4dkx" 
WorkloadEndpoint="ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0" Jan 23 23:57:21.212351 containerd[2029]: 2026-01-23 23:57:21.139 [INFO][5680] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4dkx" WorkloadEndpoint="ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"761e0c97-a113-4485-8707-6df97f1eaf68", ResourceVersion:"1126", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25", Pod:"coredns-668d6bf9bc-d4dkx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib3e29952470", MAC:"f6:1f:5b:92:ef:90", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:21.212351 containerd[2029]: 2026-01-23 23:57:21.189 [INFO][5680] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25" Namespace="kube-system" Pod="coredns-668d6bf9bc-d4dkx" WorkloadEndpoint="ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0" Jan 23 23:57:21.302663 containerd[2029]: time="2026-01-23T23:57:21.299861128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:21.302663 containerd[2029]: time="2026-01-23T23:57:21.299988712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:21.302663 containerd[2029]: time="2026-01-23T23:57:21.300026428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:21.304832 containerd[2029]: time="2026-01-23T23:57:21.304241668Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:21.373135 systemd[1]: Started cri-containerd-20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25.scope - libcontainer container 20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25. Jan 23 23:57:21.483141 containerd[2029]: 2026-01-23 23:57:21.318 [WARNING][5744] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"046ae13d-0e4a-437d-9371-4ba65edfa713", ResourceVersion:"1118", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"820f688a346bcd8dd2d6211229c6e541b6c0cd75b021f20604efffc7bc4bdfce", Pod:"goldmane-666569f655-mmfhn", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.18.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali8093fbf5aaf", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:21.483141 containerd[2029]: 2026-01-23 23:57:21.318 [INFO][5744] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" Jan 23 23:57:21.483141 containerd[2029]: 2026-01-23 23:57:21.318 [INFO][5744] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" iface="eth0" netns="" Jan 23 23:57:21.483141 containerd[2029]: 2026-01-23 23:57:21.318 [INFO][5744] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" Jan 23 23:57:21.483141 containerd[2029]: 2026-01-23 23:57:21.318 [INFO][5744] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" Jan 23 23:57:21.483141 containerd[2029]: 2026-01-23 23:57:21.443 [INFO][5778] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" HandleID="k8s-pod-network.6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" Workload="ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0" Jan 23 23:57:21.483141 containerd[2029]: 2026-01-23 23:57:21.443 [INFO][5778] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 23:57:21.483141 containerd[2029]: 2026-01-23 23:57:21.444 [INFO][5778] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:21.483141 containerd[2029]: 2026-01-23 23:57:21.468 [WARNING][5778] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" HandleID="k8s-pod-network.6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" Workload="ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0" Jan 23 23:57:21.483141 containerd[2029]: 2026-01-23 23:57:21.468 [INFO][5778] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" HandleID="k8s-pod-network.6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" Workload="ip--172--31--20--253-k8s-goldmane--666569f655--mmfhn-eth0" Jan 23 23:57:21.483141 containerd[2029]: 2026-01-23 23:57:21.473 [INFO][5778] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:21.483141 containerd[2029]: 2026-01-23 23:57:21.477 [INFO][5744] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193" Jan 23 23:57:21.485071 containerd[2029]: time="2026-01-23T23:57:21.483535637Z" level=info msg="TearDown network for sandbox \"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\" successfully" Jan 23 23:57:21.489880 containerd[2029]: time="2026-01-23T23:57:21.489829961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-d4dkx,Uid:761e0c97-a113-4485-8707-6df97f1eaf68,Namespace:kube-system,Attempt:1,} returns sandbox id \"20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25\"" Jan 23 23:57:21.494078 containerd[2029]: time="2026-01-23T23:57:21.493968977Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:57:21.494396 containerd[2029]: time="2026-01-23T23:57:21.494081873Z" level=info msg="RemovePodSandbox \"6f0d7b341028a15c54065d8b43b08a324ce94ad5497b45c73b63d56d6e5d6193\" returns successfully" Jan 23 23:57:21.494877 containerd[2029]: time="2026-01-23T23:57:21.494825261Z" level=info msg="StopPodSandbox for \"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\"" Jan 23 23:57:21.541293 containerd[2029]: time="2026-01-23T23:57:21.541133633Z" level=info msg="CreateContainer within sandbox \"20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:57:21.599992 containerd[2029]: time="2026-01-23T23:57:21.599930813Z" level=info msg="CreateContainer within sandbox \"20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7cc520798d7c799f2d973018feae20b3f3ae0d6d0b7999b03255d77d47fedc47\"" Jan 23 23:57:21.606496 containerd[2029]: time="2026-01-23T23:57:21.605880125Z" level=info msg="StartContainer for \"7cc520798d7c799f2d973018feae20b3f3ae0d6d0b7999b03255d77d47fedc47\"" Jan 23 23:57:21.687653 systemd[1]: Started cri-containerd-7cc520798d7c799f2d973018feae20b3f3ae0d6d0b7999b03255d77d47fedc47.scope - libcontainer container 7cc520798d7c799f2d973018feae20b3f3ae0d6d0b7999b03255d77d47fedc47. 
Jan 23 23:57:21.747154 kubelet[3407]: E0123 23:57:21.747038 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-xgcqc" podUID="532cc4d2-2f64-4521-88b0-26ef20fbd1cc" Jan 23 23:57:21.795669 containerd[2029]: 2026-01-23 23:57:21.657 [WARNING][5814] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0", GenerateName:"calico-kube-controllers-5f867bfb44-", Namespace:"calico-system", SelfLink:"", UID:"471e63ba-4009-4390-becb-d3cf35fc95c6", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f867bfb44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c", Pod:"calico-kube-controllers-5f867bfb44-djs5n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali50b68ae06a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:21.795669 containerd[2029]: 2026-01-23 23:57:21.658 [INFO][5814] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" Jan 23 23:57:21.795669 containerd[2029]: 2026-01-23 23:57:21.658 [INFO][5814] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" iface="eth0" netns="" Jan 23 23:57:21.795669 containerd[2029]: 2026-01-23 23:57:21.658 [INFO][5814] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" Jan 23 23:57:21.795669 containerd[2029]: 2026-01-23 23:57:21.658 [INFO][5814] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" Jan 23 23:57:21.795669 containerd[2029]: 2026-01-23 23:57:21.736 [INFO][5836] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" HandleID="k8s-pod-network.42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" Workload="ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0" Jan 23 23:57:21.795669 containerd[2029]: 2026-01-23 23:57:21.737 [INFO][5836] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:21.795669 containerd[2029]: 2026-01-23 23:57:21.739 [INFO][5836] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:21.795669 containerd[2029]: 2026-01-23 23:57:21.771 [WARNING][5836] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" HandleID="k8s-pod-network.42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" Workload="ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0" Jan 23 23:57:21.795669 containerd[2029]: 2026-01-23 23:57:21.771 [INFO][5836] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" HandleID="k8s-pod-network.42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" Workload="ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0" Jan 23 23:57:21.795669 containerd[2029]: 2026-01-23 23:57:21.778 [INFO][5836] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:21.795669 containerd[2029]: 2026-01-23 23:57:21.782 [INFO][5814] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" Jan 23 23:57:21.797340 containerd[2029]: time="2026-01-23T23:57:21.797124078Z" level=info msg="TearDown network for sandbox \"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\" successfully" Jan 23 23:57:21.797340 containerd[2029]: time="2026-01-23T23:57:21.797254062Z" level=info msg="StopPodSandbox for \"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\" returns successfully" Jan 23 23:57:21.799311 containerd[2029]: time="2026-01-23T23:57:21.798760602Z" level=info msg="RemovePodSandbox for \"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\"" Jan 23 23:57:21.799311 containerd[2029]: time="2026-01-23T23:57:21.798820842Z" level=info msg="Forcibly stopping sandbox \"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\"" Jan 23 23:57:21.821120 containerd[2029]: time="2026-01-23T23:57:21.821060718Z" level=info msg="StartContainer for \"7cc520798d7c799f2d973018feae20b3f3ae0d6d0b7999b03255d77d47fedc47\" returns successfully" Jan 23 23:57:21.982167 containerd[2029]: 2026-01-23 23:57:21.909 [WARNING][5871] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0", GenerateName:"calico-kube-controllers-5f867bfb44-", Namespace:"calico-system", SelfLink:"", UID:"471e63ba-4009-4390-becb-d3cf35fc95c6", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 54, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f867bfb44", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"818ba7a330aaf63e0b12bf738401b7dc87a3fb6bbbf471dd1b2fe367cb55115c", Pod:"calico-kube-controllers-5f867bfb44-djs5n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.18.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali50b68ae06a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:21.982167 containerd[2029]: 2026-01-23 23:57:21.909 [INFO][5871] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" Jan 23 23:57:21.982167 containerd[2029]: 2026-01-23 23:57:21.909 [INFO][5871] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" iface="eth0" netns="" Jan 23 23:57:21.982167 containerd[2029]: 2026-01-23 23:57:21.909 [INFO][5871] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" Jan 23 23:57:21.982167 containerd[2029]: 2026-01-23 23:57:21.909 [INFO][5871] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" Jan 23 23:57:21.982167 containerd[2029]: 2026-01-23 23:57:21.959 [INFO][5881] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" HandleID="k8s-pod-network.42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" Workload="ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0" Jan 23 23:57:21.982167 containerd[2029]: 2026-01-23 23:57:21.959 [INFO][5881] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:21.982167 containerd[2029]: 2026-01-23 23:57:21.959 [INFO][5881] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:21.982167 containerd[2029]: 2026-01-23 23:57:21.972 [WARNING][5881] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" HandleID="k8s-pod-network.42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" Workload="ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0" Jan 23 23:57:21.982167 containerd[2029]: 2026-01-23 23:57:21.972 [INFO][5881] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" HandleID="k8s-pod-network.42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" Workload="ip--172--31--20--253-k8s-calico--kube--controllers--5f867bfb44--djs5n-eth0" Jan 23 23:57:21.982167 containerd[2029]: 2026-01-23 23:57:21.975 [INFO][5881] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:21.982167 containerd[2029]: 2026-01-23 23:57:21.978 [INFO][5871] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115" Jan 23 23:57:21.982167 containerd[2029]: time="2026-01-23T23:57:21.981280795Z" level=info msg="TearDown network for sandbox \"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\" successfully" Jan 23 23:57:21.989649 containerd[2029]: time="2026-01-23T23:57:21.989570683Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:57:21.990157 containerd[2029]: time="2026-01-23T23:57:21.989684851Z" level=info msg="RemovePodSandbox \"42a918a43d674dba35c3af070f4c9f0ecad506d60595e4c20f49979661dba115\" returns successfully" Jan 23 23:57:22.242755 systemd-networkd[1942]: calib3e29952470: Gained IPv6LL Jan 23 23:57:22.791426 kubelet[3407]: I0123 23:57:22.791317 3407 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-d4dkx" podStartSLOduration=59.791292547 podStartE2EDuration="59.791292547s" podCreationTimestamp="2026-01-23 23:56:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:57:22.790000459 +0000 UTC m=+65.012639700" watchObservedRunningTime="2026-01-23 23:57:22.791292547 +0000 UTC m=+65.013931764" Jan 23 23:57:22.945093 systemd[1]: Started sshd@9-172.31.20.253:22-4.153.228.146:40672.service - OpenSSH per-connection server daemon (4.153.228.146:40672). Jan 23 23:57:23.458414 sshd[5893]: Accepted publickey for core from 4.153.228.146 port 40672 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:23.461267 sshd[5893]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:23.469830 systemd-logind[2003]: New session 10 of user core. Jan 23 23:57:23.476761 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 23:57:23.950501 sshd[5893]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:23.955876 systemd[1]: sshd@9-172.31.20.253:22-4.153.228.146:40672.service: Deactivated successfully. Jan 23 23:57:23.960972 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 23:57:23.965542 systemd-logind[2003]: Session 10 logged out. Waiting for processes to exit. Jan 23 23:57:23.967964 systemd-logind[2003]: Removed session 10. 
Jan 23 23:57:24.051707 systemd[1]: Started sshd@10-172.31.20.253:22-4.153.228.146:40674.service - OpenSSH per-connection server daemon (4.153.228.146:40674). Jan 23 23:57:24.529652 ntpd[1996]: Listen normally on 7 vxlan.calico 192.168.18.128:123 Jan 23 23:57:24.529781 ntpd[1996]: Listen normally on 8 cali74a60d8b08b [fe80::ecee:eeff:feee:eeee%4]:123 Jan 23 23:57:24.529864 ntpd[1996]: Listen normally on 9 vxlan.calico [fe80::6462:5aff:fe7e:6bb7%5]:123 Jan 23 23:57:24.529935 ntpd[1996]: Listen normally on 10 calic97b1363e84 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 23 23:57:24.530007 ntpd[1996]: Listen normally on 11 cali50b68ae06a2 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 23 23:57:24.530074 ntpd[1996]: Listen normally on 12 cali8093fbf5aaf [fe80::ecee:eeff:feee:eeee%10]:123 Jan 23 23:57:24.530141 ntpd[1996]: Listen normally on 13 calidf0375cbfb5 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 23 23:57:24.530208 ntpd[1996]: Listen normally on 14 cali9ded01ceede [fe80::ecee:eeff:feee:eeee%12]:123 Jan 23 23:57:24.530272 ntpd[1996]: Listen normally on 15 cali8cf4b600d7a [fe80::ecee:eeff:feee:eeee%13]:123 Jan 23 23:57:24.530368 ntpd[1996]: Listen normally on 16 calib3e29952470 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 23 23:57:24.551225 sshd[5906]: Accepted publickey for core from 4.153.228.146 port 40674 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:24.554003 sshd[5906]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:24.562874 systemd-logind[2003]: New session 11 of user core. Jan 23 23:57:24.567740 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 23:57:25.118578 sshd[5906]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:25.131662 systemd[1]: sshd@10-172.31.20.253:22-4.153.228.146:40674.service: Deactivated successfully. Jan 23 23:57:25.137958 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 23:57:25.141127 systemd-logind[2003]: Session 11 logged out. Waiting for processes to exit. Jan 23 23:57:25.148117 systemd-logind[2003]: Removed session 11. 
Jan 23 23:57:25.215004 systemd[1]: Started sshd@11-172.31.20.253:22-4.153.228.146:48988.service - OpenSSH per-connection server daemon (4.153.228.146:48988). Jan 23 23:57:25.733425 sshd[5918]: Accepted publickey for core from 4.153.228.146 port 48988 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:25.736773 sshd[5918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:25.752766 systemd-logind[2003]: New session 12 of user core. Jan 23 23:57:25.760812 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 23:57:26.225254 sshd[5918]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:26.230571 systemd[1]: sshd@11-172.31.20.253:22-4.153.228.146:48988.service: Deactivated successfully. Jan 23 23:57:26.235898 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 23:57:26.239542 systemd-logind[2003]: Session 12 logged out. Waiting for processes to exit. Jan 23 23:57:26.241981 systemd-logind[2003]: Removed session 12. Jan 23 23:57:28.044765 containerd[2029]: time="2026-01-23T23:57:28.044622369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:57:28.303753 containerd[2029]: time="2026-01-23T23:57:28.303379355Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:28.305836 containerd[2029]: time="2026-01-23T23:57:28.305629055Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:57:28.305836 containerd[2029]: time="2026-01-23T23:57:28.305673623Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:57:28.306020 kubelet[3407]: E0123 23:57:28.305971 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:57:28.307430 kubelet[3407]: E0123 23:57:28.306038 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:57:28.311828 kubelet[3407]: E0123 23:57:28.311706 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c5e6be3eac514cecb383af9368500204,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jp99b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7844db9b64-8ffcr_calico-system(93ccd330-a859-4470-8f8e-396ff6ffb624): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:28.314524 containerd[2029]: time="2026-01-23T23:57:28.314390651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:57:28.560932 containerd[2029]: time="2026-01-23T23:57:28.560766852Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:28.563079 containerd[2029]: time="2026-01-23T23:57:28.562949112Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:57:28.563079 containerd[2029]: time="2026-01-23T23:57:28.563034432Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:57:28.563691 kubelet[3407]: E0123 23:57:28.563333 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:57:28.563691 kubelet[3407]: E0123 23:57:28.563396 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:57:28.563691 kubelet[3407]: E0123 23:57:28.563597 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jp99b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7844db9b64-8ffcr_calico-system(93ccd330-a859-4470-8f8e-396ff6ffb624): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:28.565240 kubelet[3407]: E0123 23:57:28.565152 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7844db9b64-8ffcr" podUID="93ccd330-a859-4470-8f8e-396ff6ffb624" Jan 23 23:57:31.042427 containerd[2029]: time="2026-01-23T23:57:31.041886828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:57:31.312224 containerd[2029]: time="2026-01-23T23:57:31.312092042Z" level=info msg="trying 
next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:31.317232 containerd[2029]: time="2026-01-23T23:57:31.314916998Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:57:31.317232 containerd[2029]: time="2026-01-23T23:57:31.315066974Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:57:31.317486 kubelet[3407]: E0123 23:57:31.315267 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:31.317486 kubelet[3407]: E0123 23:57:31.315329 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:31.317486 kubelet[3407]: E0123 23:57:31.315548 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b545h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fdd48bcf6-n88dg_calico-apiserver(17163695-eef5-4bf6-be5b-0d305316c85b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:31.319140 kubelet[3407]: E0123 23:57:31.317939 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-n88dg" podUID="17163695-eef5-4bf6-be5b-0d305316c85b" Jan 23 23:57:31.319098 systemd[1]: Started sshd@12-172.31.20.253:22-4.153.228.146:48996.service - OpenSSH per-connection server daemon (4.153.228.146:48996). Jan 23 23:57:31.824656 sshd[5946]: Accepted publickey for core from 4.153.228.146 port 48996 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:31.827623 sshd[5946]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:31.837101 systemd-logind[2003]: New session 13 of user core. Jan 23 23:57:31.842716 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 23:57:32.313966 sshd[5946]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:32.320146 systemd[1]: sshd@12-172.31.20.253:22-4.153.228.146:48996.service: Deactivated successfully. Jan 23 23:57:32.325249 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 23:57:32.328521 systemd-logind[2003]: Session 13 logged out. Waiting for processes to exit. Jan 23 23:57:32.331094 systemd-logind[2003]: Removed session 13. 
Jan 23 23:57:33.042140 containerd[2029]: time="2026-01-23T23:57:33.041349674Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:57:33.346829 containerd[2029]: time="2026-01-23T23:57:33.346573156Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:33.349231 containerd[2029]: time="2026-01-23T23:57:33.349095784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:57:33.349231 containerd[2029]: time="2026-01-23T23:57:33.349169884Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:57:33.349648 kubelet[3407]: E0123 23:57:33.349593 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:57:33.352967 kubelet[3407]: E0123 23:57:33.349662 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:57:33.352967 kubelet[3407]: E0123 23:57:33.350040 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29xwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wc9cs_calico-system(116c2572-ef7b-49fd-a16b-25d6e19f65b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:33.353434 containerd[2029]: time="2026-01-23T23:57:33.351624928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:57:33.616154 containerd[2029]: time="2026-01-23T23:57:33.615992525Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:33.618244 containerd[2029]: time="2026-01-23T23:57:33.618174089Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:57:33.618381 containerd[2029]: time="2026-01-23T23:57:33.618307421Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:57:33.618723 kubelet[3407]: E0123 23:57:33.618648 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:57:33.619132 kubelet[3407]: E0123 23:57:33.618720 3407 kuberuntime_image.go:55] "Failed to pull image" 
err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:57:33.619132 kubelet[3407]: E0123 23:57:33.619010 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzbwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f867bfb44-djs5n_calico-system(471e63ba-4009-4390-becb-d3cf35fc95c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:33.620048 containerd[2029]: time="2026-01-23T23:57:33.619682153Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:57:33.621223 kubelet[3407]: E0123 23:57:33.620623 3407 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f867bfb44-djs5n" podUID="471e63ba-4009-4390-becb-d3cf35fc95c6" Jan 23 23:57:33.882771 containerd[2029]: time="2026-01-23T23:57:33.881938074Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:33.884232 containerd[2029]: time="2026-01-23T23:57:33.884152134Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:57:33.884350 containerd[2029]: time="2026-01-23T23:57:33.884298150Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:57:33.884717 kubelet[3407]: E0123 23:57:33.884631 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:57:33.884863 kubelet[3407]: E0123 23:57:33.884716 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:57:33.885049 kubelet[3407]: E0123 23:57:33.884884 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29xwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wc9cs_calico-system(116c2572-ef7b-49fd-a16b-25d6e19f65b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:33.886421 kubelet[3407]: E0123 23:57:33.886336 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wc9cs" podUID="116c2572-ef7b-49fd-a16b-25d6e19f65b8" Jan 23 23:57:34.043022 containerd[2029]: time="2026-01-23T23:57:34.042219675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:57:34.309089 containerd[2029]: time="2026-01-23T23:57:34.308844509Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:34.311229 containerd[2029]: time="2026-01-23T23:57:34.311084801Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:57:34.311229 containerd[2029]: time="2026-01-23T23:57:34.311186417Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 23:57:34.311606 kubelet[3407]: E0123 23:57:34.311350 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:57:34.311606 kubelet[3407]: E0123 23:57:34.311410 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:57:34.311895 kubelet[3407]: E0123 23:57:34.311623 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-clm5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mmfhn_calico-system(046ae13d-0e4a-437d-9371-4ba65edfa713): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:34.313117 kubelet[3407]: E0123 23:57:34.313033 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mmfhn" podUID="046ae13d-0e4a-437d-9371-4ba65edfa713" Jan 23 23:57:36.044387 containerd[2029]: time="2026-01-23T23:57:36.043071605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:57:36.312928 containerd[2029]: time="2026-01-23T23:57:36.312570330Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:36.314853 containerd[2029]: time="2026-01-23T23:57:36.314783898Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:57:36.314978 containerd[2029]: time="2026-01-23T23:57:36.314916894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:57:36.315648 kubelet[3407]: E0123 23:57:36.315234 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:36.315648 kubelet[3407]: E0123 23:57:36.315297 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:36.315648 kubelet[3407]: E0123 23:57:36.315506 3407 kuberuntime_manager.go:1341] "Unhandled Error" 
err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q6pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fdd48bcf6-xgcqc_calico-apiserver(532cc4d2-2f64-4521-88b0-26ef20fbd1cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:36.317533 kubelet[3407]: E0123 23:57:36.317441 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-xgcqc" podUID="532cc4d2-2f64-4521-88b0-26ef20fbd1cc" Jan 23 23:57:37.421089 systemd[1]: Started sshd@13-172.31.20.253:22-4.153.228.146:52182.service - OpenSSH per-connection server daemon (4.153.228.146:52182). Jan 23 23:57:37.959119 sshd[5967]: Accepted publickey for core from 4.153.228.146 port 52182 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:37.961926 sshd[5967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:37.969505 systemd-logind[2003]: New session 14 of user core. Jan 23 23:57:37.977747 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 23 23:57:38.464386 sshd[5967]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:38.473907 systemd[1]: sshd@13-172.31.20.253:22-4.153.228.146:52182.service: Deactivated successfully. Jan 23 23:57:38.479658 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 23:57:38.481204 systemd-logind[2003]: Session 14 logged out. Waiting for processes to exit. Jan 23 23:57:38.483384 systemd-logind[2003]: Removed session 14. Jan 23 23:57:39.042414 kubelet[3407]: E0123 23:57:39.042226 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7844db9b64-8ffcr" podUID="93ccd330-a859-4470-8f8e-396ff6ffb624" Jan 23 23:57:42.044093 kubelet[3407]: E0123 23:57:42.042992 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-n88dg" podUID="17163695-eef5-4bf6-be5b-0d305316c85b" Jan 23 23:57:43.561029 systemd[1]: Started sshd@14-172.31.20.253:22-4.153.228.146:52194.service - OpenSSH per-connection server daemon (4.153.228.146:52194). Jan 23 23:57:44.068924 sshd[6006]: Accepted publickey for core from 4.153.228.146 port 52194 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:44.071720 sshd[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:44.080534 systemd-logind[2003]: New session 15 of user core. Jan 23 23:57:44.088720 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 23:57:44.608685 sshd[6006]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:44.615719 systemd[1]: sshd@14-172.31.20.253:22-4.153.228.146:52194.service: Deactivated successfully. Jan 23 23:57:44.622654 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 23:57:44.632278 systemd-logind[2003]: Session 15 logged out. Waiting for processes to exit. Jan 23 23:57:44.635098 systemd-logind[2003]: Removed session 15. 
Jan 23 23:57:45.041514 kubelet[3407]: E0123 23:57:45.041258 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f867bfb44-djs5n" podUID="471e63ba-4009-4390-becb-d3cf35fc95c6" Jan 23 23:57:49.040069 kubelet[3407]: E0123 23:57:49.039684 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mmfhn" podUID="046ae13d-0e4a-437d-9371-4ba65edfa713" Jan 23 23:57:49.046507 kubelet[3407]: E0123 23:57:49.043829 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wc9cs" podUID="116c2572-ef7b-49fd-a16b-25d6e19f65b8" Jan 23 23:57:49.717963 systemd[1]: Started sshd@15-172.31.20.253:22-4.153.228.146:48850.service - OpenSSH per-connection server daemon (4.153.228.146:48850). Jan 23 23:57:50.284485 sshd[6022]: Accepted publickey for core from 4.153.228.146 port 48850 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:50.287688 sshd[6022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:50.304144 systemd-logind[2003]: New session 16 of user core. Jan 23 23:57:50.329792 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 23:57:50.903377 sshd[6022]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:50.911324 systemd[1]: sshd@15-172.31.20.253:22-4.153.228.146:48850.service: Deactivated successfully. Jan 23 23:57:50.920521 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 23:57:50.924398 systemd-logind[2003]: Session 16 logged out. Waiting for processes to exit. Jan 23 23:57:50.927147 systemd-logind[2003]: Removed session 16. Jan 23 23:57:50.993954 systemd[1]: Started sshd@16-172.31.20.253:22-4.153.228.146:48852.service - OpenSSH per-connection server daemon (4.153.228.146:48852). 
Jan 23 23:57:51.039166 kubelet[3407]: E0123 23:57:51.039049 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-xgcqc" podUID="532cc4d2-2f64-4521-88b0-26ef20fbd1cc" Jan 23 23:57:51.499509 sshd[6036]: Accepted publickey for core from 4.153.228.146 port 48852 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:51.503939 sshd[6036]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:51.520481 systemd-logind[2003]: New session 17 of user core. Jan 23 23:57:51.523771 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 23:57:52.307347 sshd[6036]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:52.316920 systemd-logind[2003]: Session 17 logged out. Waiting for processes to exit. Jan 23 23:57:52.318311 systemd[1]: sshd@16-172.31.20.253:22-4.153.228.146:48852.service: Deactivated successfully. Jan 23 23:57:52.326172 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 23:57:52.330608 systemd-logind[2003]: Removed session 17. Jan 23 23:57:52.398346 systemd[1]: Started sshd@17-172.31.20.253:22-4.153.228.146:48864.service - OpenSSH per-connection server daemon (4.153.228.146:48864). Jan 23 23:57:52.903509 sshd[6048]: Accepted publickey for core from 4.153.228.146 port 48864 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:52.904869 sshd[6048]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:52.916590 systemd-logind[2003]: New session 18 of user core. Jan 23 23:57:52.925932 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 23 23:57:53.044796 containerd[2029]: time="2026-01-23T23:57:53.043835914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:57:53.377163 containerd[2029]: time="2026-01-23T23:57:53.366536819Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:53.377163 containerd[2029]: time="2026-01-23T23:57:53.368683751Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:57:53.377163 containerd[2029]: time="2026-01-23T23:57:53.368766995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:57:53.377163 containerd[2029]: time="2026-01-23T23:57:53.373180175Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:57:53.377559 kubelet[3407]: E0123 23:57:53.369041 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:57:53.377559 kubelet[3407]: E0123 23:57:53.369125 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:57:53.377559 kubelet[3407]: E0123 23:57:53.369392 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c5e6be3eac514cecb383af9368500204,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jp99b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7844db9b64-8ffcr_calico-system(93ccd330-a859-4470-8f8e-396ff6ffb624): ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:53.665630 containerd[2029]: time="2026-01-23T23:57:53.665044117Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:53.667438 containerd[2029]: time="2026-01-23T23:57:53.667360477Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:57:53.667438 containerd[2029]: time="2026-01-23T23:57:53.667429513Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:57:53.668745 kubelet[3407]: E0123 23:57:53.667960 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:57:53.668745 kubelet[3407]: E0123 23:57:53.668029 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:57:53.668745 kubelet[3407]: E0123 23:57:53.668176 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jp99b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7844db9b64-8ffcr_calico-system(93ccd330-a859-4470-8f8e-396ff6ffb624): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:53.670108 kubelet[3407]: E0123 23:57:53.670006 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7844db9b64-8ffcr" podUID="93ccd330-a859-4470-8f8e-396ff6ffb624" Jan 23 23:57:54.644747 sshd[6048]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:54.655435 systemd[1]: sshd@17-172.31.20.253:22-4.153.228.146:48864.service: Deactivated successfully. Jan 23 23:57:54.667642 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 23:57:54.671420 systemd-logind[2003]: Session 18 logged out. Waiting for processes to exit. Jan 23 23:57:54.674960 systemd-logind[2003]: Removed session 18. 
Jan 23 23:57:54.758722 systemd[1]: Started sshd@18-172.31.20.253:22-4.153.228.146:41856.service - OpenSSH per-connection server daemon (4.153.228.146:41856). Jan 23 23:57:55.321955 sshd[6066]: Accepted publickey for core from 4.153.228.146 port 41856 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:55.325326 sshd[6066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:55.335824 systemd-logind[2003]: New session 19 of user core. Jan 23 23:57:55.344552 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 23:57:56.312341 sshd[6066]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:56.320945 systemd-logind[2003]: Session 19 logged out. Waiting for processes to exit. Jan 23 23:57:56.321893 systemd[1]: sshd@18-172.31.20.253:22-4.153.228.146:41856.service: Deactivated successfully. Jan 23 23:57:56.328649 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 23:57:56.336014 systemd-logind[2003]: Removed session 19. Jan 23 23:57:56.404672 systemd[1]: Started sshd@19-172.31.20.253:22-4.153.228.146:41870.service - OpenSSH per-connection server daemon (4.153.228.146:41870). Jan 23 23:57:56.920933 sshd[6085]: Accepted publickey for core from 4.153.228.146 port 41870 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:56.923489 sshd[6085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:56.938910 systemd-logind[2003]: New session 20 of user core. Jan 23 23:57:56.946269 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 23:57:57.040175 containerd[2029]: time="2026-01-23T23:57:57.040106953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:57:57.366716 containerd[2029]: time="2026-01-23T23:57:57.366643083Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:57.369027 containerd[2029]: time="2026-01-23T23:57:57.368848863Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:57:57.369027 containerd[2029]: time="2026-01-23T23:57:57.368959203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:57:57.369345 kubelet[3407]: E0123 23:57:57.369213 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:57.369345 kubelet[3407]: E0123 23:57:57.369279 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:57.370129 kubelet[3407]: E0123 23:57:57.369491 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b545h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fdd48bcf6-n88dg_calico-apiserver(17163695-eef5-4bf6-be5b-0d305316c85b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:57.371144 kubelet[3407]: E0123 23:57:57.371079 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-n88dg" podUID="17163695-eef5-4bf6-be5b-0d305316c85b" Jan 23 23:57:57.484803 sshd[6085]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:57.494772 systemd[1]: sshd@19-172.31.20.253:22-4.153.228.146:41870.service: Deactivated successfully. Jan 23 23:57:57.502604 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 23:57:57.510952 systemd-logind[2003]: Session 20 logged out. Waiting for processes to exit. Jan 23 23:57:57.514664 systemd-logind[2003]: Removed session 20. 
Jan 23 23:57:59.040570 containerd[2029]: time="2026-01-23T23:57:59.040385319Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:57:59.303569 containerd[2029]: time="2026-01-23T23:57:59.302670629Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:59.305526 containerd[2029]: time="2026-01-23T23:57:59.305156285Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:57:59.305526 containerd[2029]: time="2026-01-23T23:57:59.305304569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:57:59.305810 kubelet[3407]: E0123 23:57:59.305670 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:57:59.305810 kubelet[3407]: E0123 23:57:59.305733 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:57:59.306402 kubelet[3407]: E0123 23:57:59.305910 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzbwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f867bfb44-djs5n_calico-system(471e63ba-4009-4390-becb-d3cf35fc95c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:59.307748 kubelet[3407]: E0123 23:57:59.307645 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f867bfb44-djs5n" podUID="471e63ba-4009-4390-becb-d3cf35fc95c6" Jan 23 23:58:00.042859 containerd[2029]: time="2026-01-23T23:58:00.042788284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:58:00.351359 containerd[2029]: time="2026-01-23T23:58:00.351278034Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:00.353718 containerd[2029]: time="2026-01-23T23:58:00.353660262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 23:58:00.353926 containerd[2029]: time="2026-01-23T23:58:00.353553786Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:58:00.357102 kubelet[3407]: E0123 23:58:00.355211 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:58:00.357102 kubelet[3407]: E0123 23:58:00.355287 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and 
unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:58:00.357102 kubelet[3407]: E0123 23:58:00.355502 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-clm5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mmfhn_calico-system(046ae13d-0e4a-437d-9371-4ba65edfa713): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:00.358231 kubelet[3407]: E0123 23:58:00.358148 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mmfhn" podUID="046ae13d-0e4a-437d-9371-4ba65edfa713" Jan 23 23:58:01.039577 containerd[2029]: time="2026-01-23T23:58:01.039515561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:58:01.305312 containerd[2029]: time="2026-01-23T23:58:01.305143279Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:01.309001 containerd[2029]: time="2026-01-23T23:58:01.307409791Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:58:01.309001 containerd[2029]: time="2026-01-23T23:58:01.307547287Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:58:01.309190 kubelet[3407]: E0123 23:58:01.307741 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:58:01.309190 kubelet[3407]: E0123 23:58:01.307801 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:58:01.309190 kubelet[3407]: E0123 23:58:01.307963 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29xwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wc9cs_calico-system(116c2572-ef7b-49fd-a16b-25d6e19f65b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:01.313336 containerd[2029]: time="2026-01-23T23:58:01.313188211Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:58:01.585606 containerd[2029]: time="2026-01-23T23:58:01.585538424Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:01.588067 containerd[2029]: time="2026-01-23T23:58:01.587878076Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:58:01.588759 containerd[2029]: time="2026-01-23T23:58:01.587941988Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:58:01.588851 kubelet[3407]: E0123 23:58:01.588298 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:58:01.588851 kubelet[3407]: E0123 23:58:01.588373 3407 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:58:01.590744 kubelet[3407]: E0123 23:58:01.590374 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29xwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wc9cs_calico-system(116c2572-ef7b-49fd-a16b-25d6e19f65b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:01.592497 kubelet[3407]: E0123 23:58:01.591750 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-wc9cs" podUID="116c2572-ef7b-49fd-a16b-25d6e19f65b8" Jan 23 23:58:02.591642 systemd[1]: Started sshd@20-172.31.20.253:22-4.153.228.146:41878.service - OpenSSH per-connection server daemon (4.153.228.146:41878). Jan 23 23:58:03.099888 sshd[6103]: Accepted publickey for core from 4.153.228.146 port 41878 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:03.103117 sshd[6103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:03.111086 systemd-logind[2003]: New session 21 of user core. Jan 23 23:58:03.123760 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 23:58:03.617375 sshd[6103]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:03.625329 systemd[1]: sshd@20-172.31.20.253:22-4.153.228.146:41878.service: Deactivated successfully. Jan 23 23:58:03.633160 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 23:58:03.635488 systemd-logind[2003]: Session 21 logged out. Waiting for processes to exit. Jan 23 23:58:03.639934 systemd-logind[2003]: Removed session 21. Jan 23 23:58:06.056011 containerd[2029]: time="2026-01-23T23:58:06.055941862Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:58:06.058545 kubelet[3407]: E0123 23:58:06.058329 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7844db9b64-8ffcr" podUID="93ccd330-a859-4470-8f8e-396ff6ffb624" Jan 23 23:58:06.347187 containerd[2029]: time="2026-01-23T23:58:06.346943580Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:06.349552 containerd[2029]: time="2026-01-23T23:58:06.349298604Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:58:06.349552 containerd[2029]: time="2026-01-23T23:58:06.349407144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:58:06.350856 kubelet[3407]: E0123 23:58:06.350518 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:58:06.350856 kubelet[3407]: E0123 
23:58:06.350624 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:58:06.351248 kubelet[3407]: E0123 23:58:06.351093 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q6pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fdd48bcf6-xgcqc_calico-apiserver(532cc4d2-2f64-4521-88b0-26ef20fbd1cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:06.352782 kubelet[3407]: E0123 23:58:06.352433 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-xgcqc" podUID="532cc4d2-2f64-4521-88b0-26ef20fbd1cc" Jan 23 23:58:08.728668 systemd[1]: Started sshd@21-172.31.20.253:22-4.153.228.146:47916.service - OpenSSH per-connection 
server daemon (4.153.228.146:47916). Jan 23 23:58:09.276102 sshd[6115]: Accepted publickey for core from 4.153.228.146 port 47916 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:09.279154 sshd[6115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:09.291902 systemd-logind[2003]: New session 22 of user core. Jan 23 23:58:09.296109 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 23:58:09.834141 sshd[6115]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:09.843571 systemd-logind[2003]: Session 22 logged out. Waiting for processes to exit. Jan 23 23:58:09.845115 systemd[1]: sshd@21-172.31.20.253:22-4.153.228.146:47916.service: Deactivated successfully. Jan 23 23:58:09.854794 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 23:58:09.858882 systemd-logind[2003]: Removed session 22. Jan 23 23:58:11.041040 kubelet[3407]: E0123 23:58:11.040028 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-n88dg" podUID="17163695-eef5-4bf6-be5b-0d305316c85b" Jan 23 23:58:14.048505 kubelet[3407]: E0123 23:58:14.048166 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mmfhn" podUID="046ae13d-0e4a-437d-9371-4ba65edfa713" Jan 23 23:58:14.052058 kubelet[3407]: E0123 23:58:14.050103 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wc9cs" podUID="116c2572-ef7b-49fd-a16b-25d6e19f65b8" Jan 23 23:58:14.052350 kubelet[3407]: E0123 23:58:14.050377 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f867bfb44-djs5n" podUID="471e63ba-4009-4390-becb-d3cf35fc95c6" Jan 23 23:58:14.926927 systemd[1]: Started sshd@22-172.31.20.253:22-4.153.228.146:38668.service - OpenSSH per-connection server daemon (4.153.228.146:38668). Jan 23 23:58:15.436888 sshd[6150]: Accepted publickey for core from 4.153.228.146 port 38668 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:15.442606 sshd[6150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:15.452679 systemd-logind[2003]: New session 23 of user core. Jan 23 23:58:15.462546 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 23:58:16.006239 sshd[6150]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:16.014985 systemd[1]: sshd@22-172.31.20.253:22-4.153.228.146:38668.service: Deactivated successfully. Jan 23 23:58:16.022997 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 23:58:16.028855 systemd-logind[2003]: Session 23 logged out. Waiting for processes to exit. Jan 23 23:58:16.033481 systemd-logind[2003]: Removed session 23. Jan 23 23:58:17.041227 kubelet[3407]: E0123 23:58:17.041109 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7844db9b64-8ffcr" podUID="93ccd330-a859-4470-8f8e-396ff6ffb624" Jan 23 23:58:18.048812 kubelet[3407]: E0123 23:58:18.048636 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-xgcqc" podUID="532cc4d2-2f64-4521-88b0-26ef20fbd1cc" Jan 23 23:58:21.124799 systemd[1]: Started sshd@23-172.31.20.253:22-4.153.228.146:38672.service - OpenSSH per-connection server daemon (4.153.228.146:38672). 
Jan 23 23:58:21.671776 sshd[6164]: Accepted publickey for core from 4.153.228.146 port 38672 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:21.674870 sshd[6164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:21.684727 systemd-logind[2003]: New session 24 of user core. Jan 23 23:58:21.693771 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 23:58:22.000794 containerd[2029]: time="2026-01-23T23:58:21.998863037Z" level=info msg="StopPodSandbox for \"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\"" Jan 23 23:58:22.063023 kubelet[3407]: E0123 23:58:22.059230 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-n88dg" podUID="17163695-eef5-4bf6-be5b-0d305316c85b" Jan 23 23:58:22.255318 containerd[2029]: 2026-01-23 23:58:22.175 [WARNING][6182] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"761e0c97-a113-4485-8707-6df97f1eaf68", ResourceVersion:"1164", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25", Pod:"coredns-668d6bf9bc-d4dkx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib3e29952470", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:22.255318 containerd[2029]: 2026-01-23 23:58:22.176 [INFO][6182] cni-plugin/k8s.go 640: Cleaning up netns 
ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" Jan 23 23:58:22.255318 containerd[2029]: 2026-01-23 23:58:22.176 [INFO][6182] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" iface="eth0" netns="" Jan 23 23:58:22.255318 containerd[2029]: 2026-01-23 23:58:22.178 [INFO][6182] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" Jan 23 23:58:22.255318 containerd[2029]: 2026-01-23 23:58:22.178 [INFO][6182] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" Jan 23 23:58:22.255318 containerd[2029]: 2026-01-23 23:58:22.228 [INFO][6190] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" HandleID="k8s-pod-network.f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0" Jan 23 23:58:22.255318 containerd[2029]: 2026-01-23 23:58:22.228 [INFO][6190] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:22.255318 containerd[2029]: 2026-01-23 23:58:22.228 [INFO][6190] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:22.255318 containerd[2029]: 2026-01-23 23:58:22.242 [WARNING][6190] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" HandleID="k8s-pod-network.f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0" Jan 23 23:58:22.255318 containerd[2029]: 2026-01-23 23:58:22.243 [INFO][6190] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" HandleID="k8s-pod-network.f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0" Jan 23 23:58:22.255318 containerd[2029]: 2026-01-23 23:58:22.245 [INFO][6190] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:22.255318 containerd[2029]: 2026-01-23 23:58:22.249 [INFO][6182] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" Jan 23 23:58:22.255318 containerd[2029]: time="2026-01-23T23:58:22.254609751Z" level=info msg="TearDown network for sandbox \"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\" successfully" Jan 23 23:58:22.255318 containerd[2029]: time="2026-01-23T23:58:22.254647419Z" level=info msg="StopPodSandbox for \"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\" returns successfully" Jan 23 23:58:22.260008 containerd[2029]: time="2026-01-23T23:58:22.258782079Z" level=info msg="RemovePodSandbox for \"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\"" Jan 23 23:58:22.260008 containerd[2029]: time="2026-01-23T23:58:22.258837723Z" level=info msg="Forcibly stopping sandbox \"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\"" Jan 23 23:58:22.271126 sshd[6164]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:22.282080 systemd[1]: sshd@23-172.31.20.253:22-4.153.228.146:38672.service: Deactivated successfully. 
Jan 23 23:58:22.292944 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 23:58:22.301536 systemd-logind[2003]: Session 24 logged out. Waiting for processes to exit. Jan 23 23:58:22.305352 systemd-logind[2003]: Removed session 24. Jan 23 23:58:22.513376 containerd[2029]: 2026-01-23 23:58:22.392 [WARNING][6205] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"761e0c97-a113-4485-8707-6df97f1eaf68", ResourceVersion:"1164", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"20c5897ef9721ba8f8ecf304d74d56eb8706f019e2c9c7cebf13b9d1cb0f9f25", Pod:"coredns-668d6bf9bc-d4dkx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib3e29952470", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:22.513376 containerd[2029]: 2026-01-23 23:58:22.394 [INFO][6205] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" Jan 23 23:58:22.513376 containerd[2029]: 2026-01-23 23:58:22.394 [INFO][6205] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" iface="eth0" netns="" Jan 23 23:58:22.513376 containerd[2029]: 2026-01-23 23:58:22.394 [INFO][6205] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" Jan 23 23:58:22.513376 containerd[2029]: 2026-01-23 23:58:22.395 [INFO][6205] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" Jan 23 23:58:22.513376 containerd[2029]: 2026-01-23 23:58:22.471 [INFO][6214] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" HandleID="k8s-pod-network.f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0" Jan 23 23:58:22.513376 containerd[2029]: 2026-01-23 23:58:22.475 [INFO][6214] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:22.513376 containerd[2029]: 2026-01-23 23:58:22.475 [INFO][6214] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:22.513376 containerd[2029]: 2026-01-23 23:58:22.501 [WARNING][6214] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" HandleID="k8s-pod-network.f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0" Jan 23 23:58:22.513376 containerd[2029]: 2026-01-23 23:58:22.501 [INFO][6214] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" HandleID="k8s-pod-network.f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--d4dkx-eth0" Jan 23 23:58:22.513376 containerd[2029]: 2026-01-23 23:58:22.504 [INFO][6214] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:22.513376 containerd[2029]: 2026-01-23 23:58:22.508 [INFO][6205] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5" Jan 23 23:58:22.515506 containerd[2029]: time="2026-01-23T23:58:22.513549148Z" level=info msg="TearDown network for sandbox \"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\" successfully" Jan 23 23:58:22.525569 containerd[2029]: time="2026-01-23T23:58:22.525263452Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:58:22.525569 containerd[2029]: time="2026-01-23T23:58:22.525354376Z" level=info msg="RemovePodSandbox \"f0118cf99765c2c4750fdb6cf94d13879a8672cb9b7537e5bb30df0ce9e83ba5\" returns successfully" Jan 23 23:58:22.527788 containerd[2029]: time="2026-01-23T23:58:22.527713984Z" level=info msg="StopPodSandbox for \"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\"" Jan 23 23:58:22.671183 containerd[2029]: 2026-01-23 23:58:22.609 [WARNING][6228] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0", GenerateName:"calico-apiserver-7fdd48bcf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"532cc4d2-2f64-4521-88b0-26ef20fbd1cc", ResourceVersion:"1511", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fdd48bcf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa", Pod:"calico-apiserver-7fdd48bcf6-xgcqc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8cf4b600d7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:22.671183 containerd[2029]: 2026-01-23 23:58:22.610 [INFO][6228] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" Jan 23 23:58:22.671183 containerd[2029]: 2026-01-23 23:58:22.610 [INFO][6228] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" iface="eth0" netns="" Jan 23 23:58:22.671183 containerd[2029]: 2026-01-23 23:58:22.610 [INFO][6228] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" Jan 23 23:58:22.671183 containerd[2029]: 2026-01-23 23:58:22.610 [INFO][6228] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" Jan 23 23:58:22.671183 containerd[2029]: 2026-01-23 23:58:22.648 [INFO][6235] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" HandleID="k8s-pod-network.256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0" Jan 23 23:58:22.671183 containerd[2029]: 2026-01-23 23:58:22.649 [INFO][6235] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:22.671183 containerd[2029]: 2026-01-23 23:58:22.649 [INFO][6235] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:22.671183 containerd[2029]: 2026-01-23 23:58:22.662 [WARNING][6235] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" HandleID="k8s-pod-network.256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0" Jan 23 23:58:22.671183 containerd[2029]: 2026-01-23 23:58:22.662 [INFO][6235] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" HandleID="k8s-pod-network.256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0" Jan 23 23:58:22.671183 containerd[2029]: 2026-01-23 23:58:22.665 [INFO][6235] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:22.671183 containerd[2029]: 2026-01-23 23:58:22.667 [INFO][6228] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" Jan 23 23:58:22.671183 containerd[2029]: time="2026-01-23T23:58:22.670668737Z" level=info msg="TearDown network for sandbox \"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\" successfully" Jan 23 23:58:22.671183 containerd[2029]: time="2026-01-23T23:58:22.670706045Z" level=info msg="StopPodSandbox for \"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\" returns successfully" Jan 23 23:58:22.672359 containerd[2029]: time="2026-01-23T23:58:22.672091637Z" level=info msg="RemovePodSandbox for \"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\"" Jan 23 23:58:22.672359 containerd[2029]: time="2026-01-23T23:58:22.672135653Z" level=info msg="Forcibly stopping sandbox \"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\"" Jan 23 23:58:22.831854 containerd[2029]: 2026-01-23 23:58:22.740 [WARNING][6249] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0", GenerateName:"calico-apiserver-7fdd48bcf6-", Namespace:"calico-apiserver", SelfLink:"", UID:"532cc4d2-2f64-4521-88b0-26ef20fbd1cc", ResourceVersion:"1511", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7fdd48bcf6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"40ba1d287be53ca7932eb8363da2aa3d8e76307e3349ea06b041a497d67042aa", Pod:"calico-apiserver-7fdd48bcf6-xgcqc", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.18.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8cf4b600d7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:22.831854 containerd[2029]: 2026-01-23 23:58:22.740 [INFO][6249] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" Jan 23 23:58:22.831854 containerd[2029]: 2026-01-23 23:58:22.740 [INFO][6249] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" iface="eth0" netns="" Jan 23 23:58:22.831854 containerd[2029]: 2026-01-23 23:58:22.740 [INFO][6249] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" Jan 23 23:58:22.831854 containerd[2029]: 2026-01-23 23:58:22.740 [INFO][6249] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" Jan 23 23:58:22.831854 containerd[2029]: 2026-01-23 23:58:22.798 [INFO][6256] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" HandleID="k8s-pod-network.256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0" Jan 23 23:58:22.831854 containerd[2029]: 2026-01-23 23:58:22.799 [INFO][6256] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:22.831854 containerd[2029]: 2026-01-23 23:58:22.799 [INFO][6256] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:22.831854 containerd[2029]: 2026-01-23 23:58:22.820 [WARNING][6256] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" HandleID="k8s-pod-network.256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0" Jan 23 23:58:22.831854 containerd[2029]: 2026-01-23 23:58:22.820 [INFO][6256] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" HandleID="k8s-pod-network.256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" Workload="ip--172--31--20--253-k8s-calico--apiserver--7fdd48bcf6--xgcqc-eth0" Jan 23 23:58:22.831854 containerd[2029]: 2026-01-23 23:58:22.823 [INFO][6256] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:22.831854 containerd[2029]: 2026-01-23 23:58:22.827 [INFO][6249] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044" Jan 23 23:58:22.832715 containerd[2029]: time="2026-01-23T23:58:22.831915558Z" level=info msg="TearDown network for sandbox \"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\" successfully" Jan 23 23:58:22.841301 containerd[2029]: time="2026-01-23T23:58:22.841172250Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:58:22.841497 containerd[2029]: time="2026-01-23T23:58:22.841318698Z" level=info msg="RemovePodSandbox \"256e6ce394e56588f47664938d27ee461027494382f77f5ea31d4c3ecb25d044\" returns successfully" Jan 23 23:58:22.842651 containerd[2029]: time="2026-01-23T23:58:22.842125062Z" level=info msg="StopPodSandbox for \"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\"" Jan 23 23:58:23.011017 containerd[2029]: 2026-01-23 23:58:22.913 [WARNING][6270] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"116c2572-ef7b-49fd-a16b-25d6e19f65b8", ResourceVersion:"1482", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e", Pod:"csi-node-driver-wc9cs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidf0375cbfb5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:23.011017 containerd[2029]: 2026-01-23 23:58:22.914 [INFO][6270] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" Jan 23 23:58:23.011017 containerd[2029]: 2026-01-23 23:58:22.914 [INFO][6270] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" iface="eth0" netns="" Jan 23 23:58:23.011017 containerd[2029]: 2026-01-23 23:58:22.914 [INFO][6270] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" Jan 23 23:58:23.011017 containerd[2029]: 2026-01-23 23:58:22.914 [INFO][6270] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" Jan 23 23:58:23.011017 containerd[2029]: 2026-01-23 23:58:22.961 [INFO][6277] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" HandleID="k8s-pod-network.c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" Workload="ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0" Jan 23 23:58:23.011017 containerd[2029]: 2026-01-23 23:58:22.962 [INFO][6277] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:23.011017 containerd[2029]: 2026-01-23 23:58:22.962 [INFO][6277] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:23.011017 containerd[2029]: 2026-01-23 23:58:22.998 [WARNING][6277] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" HandleID="k8s-pod-network.c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" Workload="ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0" Jan 23 23:58:23.011017 containerd[2029]: 2026-01-23 23:58:22.998 [INFO][6277] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" HandleID="k8s-pod-network.c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" Workload="ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0" Jan 23 23:58:23.011017 containerd[2029]: 2026-01-23 23:58:23.002 [INFO][6277] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:23.011017 containerd[2029]: 2026-01-23 23:58:23.006 [INFO][6270] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" Jan 23 23:58:23.013867 containerd[2029]: time="2026-01-23T23:58:23.011011610Z" level=info msg="TearDown network for sandbox \"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\" successfully" Jan 23 23:58:23.013867 containerd[2029]: time="2026-01-23T23:58:23.011052854Z" level=info msg="StopPodSandbox for \"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\" returns successfully" Jan 23 23:58:23.013867 containerd[2029]: time="2026-01-23T23:58:23.012956798Z" level=info msg="RemovePodSandbox for \"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\"" Jan 23 23:58:23.013867 containerd[2029]: time="2026-01-23T23:58:23.013038062Z" level=info msg="Forcibly stopping sandbox \"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\"" Jan 23 23:58:23.168391 containerd[2029]: 2026-01-23 23:58:23.095 [WARNING][6291] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"116c2572-ef7b-49fd-a16b-25d6e19f65b8", ResourceVersion:"1482", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"4bb2b7b5b28c295789d839e7e0f2b7975faebec1fad18bf5c1e5d571ee9d915e", Pod:"csi-node-driver-wc9cs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.18.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calidf0375cbfb5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:23.168391 containerd[2029]: 2026-01-23 23:58:23.095 [INFO][6291] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" Jan 23 23:58:23.168391 containerd[2029]: 2026-01-23 23:58:23.095 [INFO][6291] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" iface="eth0" netns="" Jan 23 23:58:23.168391 containerd[2029]: 2026-01-23 23:58:23.095 [INFO][6291] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" Jan 23 23:58:23.168391 containerd[2029]: 2026-01-23 23:58:23.095 [INFO][6291] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" Jan 23 23:58:23.168391 containerd[2029]: 2026-01-23 23:58:23.142 [INFO][6298] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" HandleID="k8s-pod-network.c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" Workload="ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0" Jan 23 23:58:23.168391 containerd[2029]: 2026-01-23 23:58:23.143 [INFO][6298] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:23.168391 containerd[2029]: 2026-01-23 23:58:23.143 [INFO][6298] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:23.168391 containerd[2029]: 2026-01-23 23:58:23.156 [WARNING][6298] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" HandleID="k8s-pod-network.c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" Workload="ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0" Jan 23 23:58:23.168391 containerd[2029]: 2026-01-23 23:58:23.156 [INFO][6298] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" HandleID="k8s-pod-network.c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" Workload="ip--172--31--20--253-k8s-csi--node--driver--wc9cs-eth0" Jan 23 23:58:23.168391 containerd[2029]: 2026-01-23 23:58:23.159 [INFO][6298] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:23.168391 containerd[2029]: 2026-01-23 23:58:23.162 [INFO][6291] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1" Jan 23 23:58:23.173656 containerd[2029]: time="2026-01-23T23:58:23.169372227Z" level=info msg="TearDown network for sandbox \"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\" successfully" Jan 23 23:58:23.184933 containerd[2029]: time="2026-01-23T23:58:23.183973407Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:58:23.184933 containerd[2029]: time="2026-01-23T23:58:23.184224087Z" level=info msg="RemovePodSandbox \"c06f5af64cea19dbef66d7d38d8897145f3c2cabc19e7ebd3b85af99b8843dc1\" returns successfully" Jan 23 23:58:23.186892 containerd[2029]: time="2026-01-23T23:58:23.185270763Z" level=info msg="StopPodSandbox for \"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\"" Jan 23 23:58:23.329380 containerd[2029]: 2026-01-23 23:58:23.256 [WARNING][6312] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d79f639b-89ce-4a3e-898f-c563a6cc1a21", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49", Pod:"coredns-668d6bf9bc-xljwn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ded01ceede", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:23.329380 containerd[2029]: 2026-01-23 23:58:23.257 [INFO][6312] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" Jan 23 23:58:23.329380 containerd[2029]: 2026-01-23 23:58:23.257 [INFO][6312] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" iface="eth0" netns="" Jan 23 23:58:23.329380 containerd[2029]: 2026-01-23 23:58:23.257 [INFO][6312] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" Jan 23 23:58:23.329380 containerd[2029]: 2026-01-23 23:58:23.257 [INFO][6312] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" Jan 23 23:58:23.329380 containerd[2029]: 2026-01-23 23:58:23.303 [INFO][6319] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" HandleID="k8s-pod-network.fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0" Jan 23 23:58:23.329380 containerd[2029]: 2026-01-23 23:58:23.305 [INFO][6319] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:23.329380 containerd[2029]: 2026-01-23 23:58:23.305 [INFO][6319] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:58:23.329380 containerd[2029]: 2026-01-23 23:58:23.317 [WARNING][6319] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" HandleID="k8s-pod-network.fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0" Jan 23 23:58:23.329380 containerd[2029]: 2026-01-23 23:58:23.317 [INFO][6319] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" HandleID="k8s-pod-network.fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0" Jan 23 23:58:23.329380 containerd[2029]: 2026-01-23 23:58:23.320 [INFO][6319] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:23.329380 containerd[2029]: 2026-01-23 23:58:23.323 [INFO][6312] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" Jan 23 23:58:23.331148 containerd[2029]: time="2026-01-23T23:58:23.329420644Z" level=info msg="TearDown network for sandbox \"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\" successfully" Jan 23 23:58:23.331148 containerd[2029]: time="2026-01-23T23:58:23.329517496Z" level=info msg="StopPodSandbox for \"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\" returns successfully" Jan 23 23:58:23.332562 containerd[2029]: time="2026-01-23T23:58:23.331963180Z" level=info msg="RemovePodSandbox for \"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\"" Jan 23 23:58:23.332562 containerd[2029]: time="2026-01-23T23:58:23.332019292Z" level=info msg="Forcibly stopping sandbox \"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\"" Jan 23 23:58:23.486606 containerd[2029]: 2026-01-23 23:58:23.408 [WARNING][6333] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d79f639b-89ce-4a3e-898f-c563a6cc1a21", ResourceVersion:"1138", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-20-253", ContainerID:"cd4365d505e72106a55956c35d2f56283f7f8cde096038128cc67af62c255f49", Pod:"coredns-668d6bf9bc-xljwn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.18.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9ded01ceede", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:23.486606 containerd[2029]: 2026-01-23 23:58:23.409 [INFO][6333] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" Jan 23 23:58:23.486606 containerd[2029]: 2026-01-23 23:58:23.409 [INFO][6333] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" iface="eth0" netns="" Jan 23 23:58:23.486606 containerd[2029]: 2026-01-23 23:58:23.409 [INFO][6333] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" Jan 23 23:58:23.486606 containerd[2029]: 2026-01-23 23:58:23.409 [INFO][6333] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" Jan 23 23:58:23.486606 containerd[2029]: 2026-01-23 23:58:23.452 [INFO][6340] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" HandleID="k8s-pod-network.fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0" Jan 23 23:58:23.486606 containerd[2029]: 2026-01-23 23:58:23.452 [INFO][6340] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:23.486606 containerd[2029]: 2026-01-23 23:58:23.452 [INFO][6340] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:58:23.486606 containerd[2029]: 2026-01-23 23:58:23.473 [WARNING][6340] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" HandleID="k8s-pod-network.fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0" Jan 23 23:58:23.486606 containerd[2029]: 2026-01-23 23:58:23.473 [INFO][6340] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" HandleID="k8s-pod-network.fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" Workload="ip--172--31--20--253-k8s-coredns--668d6bf9bc--xljwn-eth0" Jan 23 23:58:23.486606 containerd[2029]: 2026-01-23 23:58:23.476 [INFO][6340] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:23.486606 containerd[2029]: 2026-01-23 23:58:23.482 [INFO][6333] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7" Jan 23 23:58:23.486606 containerd[2029]: time="2026-01-23T23:58:23.486570509Z" level=info msg="TearDown network for sandbox \"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\" successfully" Jan 23 23:58:23.497084 containerd[2029]: time="2026-01-23T23:58:23.496992449Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:58:23.497244 containerd[2029]: time="2026-01-23T23:58:23.497086361Z" level=info msg="RemovePodSandbox \"fc0d3ecf913c31bcd903a2ad487bec60f7abd9883d3177c00592df45186eddf7\" returns successfully" Jan 23 23:58:26.049630 kubelet[3407]: E0123 23:58:26.049126 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f867bfb44-djs5n" podUID="471e63ba-4009-4390-becb-d3cf35fc95c6" Jan 23 23:58:27.042383 kubelet[3407]: E0123 23:58:27.042303 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-wc9cs" podUID="116c2572-ef7b-49fd-a16b-25d6e19f65b8" Jan 23 23:58:27.368937 systemd[1]: Started sshd@24-172.31.20.253:22-4.153.228.146:56736.service - OpenSSH per-connection server daemon (4.153.228.146:56736). Jan 23 23:58:27.882342 sshd[6350]: Accepted publickey for core from 4.153.228.146 port 56736 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:27.886295 sshd[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:27.898687 systemd-logind[2003]: New session 25 of user core. Jan 23 23:58:27.906752 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 23 23:58:28.048421 kubelet[3407]: E0123 23:58:28.047941 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7844db9b64-8ffcr" podUID="93ccd330-a859-4470-8f8e-396ff6ffb624" Jan 23 23:58:28.400899 sshd[6350]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:28.409384 systemd[1]: sshd@24-172.31.20.253:22-4.153.228.146:56736.service: Deactivated successfully. Jan 23 23:58:28.414682 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 23:58:28.416695 systemd-logind[2003]: Session 25 logged out. Waiting for processes to exit. Jan 23 23:58:28.420231 systemd-logind[2003]: Removed session 25. 
Jan 23 23:58:29.040092 kubelet[3407]: E0123 23:58:29.040017 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mmfhn" podUID="046ae13d-0e4a-437d-9371-4ba65edfa713" Jan 23 23:58:31.040149 kubelet[3407]: E0123 23:58:31.039101 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-xgcqc" podUID="532cc4d2-2f64-4521-88b0-26ef20fbd1cc" Jan 23 23:58:33.498095 systemd[1]: Started sshd@25-172.31.20.253:22-4.153.228.146:56740.service - OpenSSH per-connection server daemon (4.153.228.146:56740). Jan 23 23:58:34.009288 sshd[6363]: Accepted publickey for core from 4.153.228.146 port 56740 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:34.011201 sshd[6363]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:34.024020 systemd-logind[2003]: New session 26 of user core. Jan 23 23:58:34.032805 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 23 23:58:34.044883 kubelet[3407]: E0123 23:58:34.044722 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-n88dg" podUID="17163695-eef5-4bf6-be5b-0d305316c85b" Jan 23 23:58:34.498996 sshd[6363]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:34.508432 systemd[1]: sshd@25-172.31.20.253:22-4.153.228.146:56740.service: Deactivated successfully. Jan 23 23:58:34.516061 systemd[1]: session-26.scope: Deactivated successfully. Jan 23 23:58:34.518565 systemd-logind[2003]: Session 26 logged out. Waiting for processes to exit. Jan 23 23:58:34.520634 systemd-logind[2003]: Removed session 26. 
Jan 23 23:58:39.038782 kubelet[3407]: E0123 23:58:39.038716 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f867bfb44-djs5n" podUID="471e63ba-4009-4390-becb-d3cf35fc95c6" Jan 23 23:58:40.046367 kubelet[3407]: E0123 23:58:40.046317 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mmfhn" podUID="046ae13d-0e4a-437d-9371-4ba65edfa713" Jan 23 23:58:42.043377 containerd[2029]: time="2026-01-23T23:58:42.043248849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:58:42.322145 containerd[2029]: time="2026-01-23T23:58:42.322074838Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:42.324320 containerd[2029]: time="2026-01-23T23:58:42.324258586Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:58:42.324482 containerd[2029]: time="2026-01-23T23:58:42.324406894Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:58:42.325119 kubelet[3407]: E0123 23:58:42.324766 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:58:42.325119 kubelet[3407]: E0123 23:58:42.324834 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:58:42.326022 kubelet[3407]: E0123 23:58:42.325125 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c5e6be3eac514cecb383af9368500204,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jp99b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7844db9b64-8ffcr_calico-system(93ccd330-a859-4470-8f8e-396ff6ffb624): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:42.326794 containerd[2029]: time="2026-01-23T23:58:42.326333506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:58:42.601477 containerd[2029]: time="2026-01-23T23:58:42.601288536Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:42.603746 containerd[2029]: time="2026-01-23T23:58:42.603596268Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:58:42.603746 containerd[2029]: time="2026-01-23T23:58:42.603681852Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:58:42.604003 kubelet[3407]: E0123 23:58:42.603880 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:58:42.604003 kubelet[3407]: E0123 23:58:42.603939 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:58:42.604725 kubelet[3407]: E0123 23:58:42.604203 3407 kuberuntime_manager.go:1341] 
"Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29xwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wc9cs_calico-system(116c2572-ef7b-49fd-a16b-25d6e19f65b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:42.604930 containerd[2029]: time="2026-01-23T23:58:42.604543140Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:58:42.893771 containerd[2029]: time="2026-01-23T23:58:42.893614765Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:42.896464 containerd[2029]: time="2026-01-23T23:58:42.896349841Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:58:42.896659 containerd[2029]: time="2026-01-23T23:58:42.896566081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:58:42.896924 kubelet[3407]: E0123 23:58:42.896849 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:58:42.897014 kubelet[3407]: E0123 23:58:42.896925 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:58:42.897284 kubelet[3407]: E0123 23:58:42.897198 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jp99b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7844db9b64-8ffcr_calico-system(93ccd330-a859-4470-8f8e-396ff6ffb624): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:42.897909 containerd[2029]: time="2026-01-23T23:58:42.897805069Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:58:42.899144 kubelet[3407]: E0123 23:58:42.898923 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7844db9b64-8ffcr" podUID="93ccd330-a859-4470-8f8e-396ff6ffb624" Jan 23 23:58:43.177972 containerd[2029]: time="2026-01-23T23:58:43.177274187Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:43.179509 containerd[2029]: time="2026-01-23T23:58:43.179426519Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:58:43.179653 containerd[2029]: time="2026-01-23T23:58:43.179589827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:58:43.179821 kubelet[3407]: E0123 23:58:43.179769 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:58:43.179933 kubelet[3407]: E0123 23:58:43.179838 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:58:43.180064 kubelet[3407]: E0123 23:58:43.179996 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-29xwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-wc9cs_calico-system(116c2572-ef7b-49fd-a16b-25d6e19f65b8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:43.181290 kubelet[3407]: E0123 23:58:43.181203 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wc9cs" podUID="116c2572-ef7b-49fd-a16b-25d6e19f65b8" Jan 23 23:58:45.039787 kubelet[3407]: E0123 23:58:45.039709 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-7fdd48bcf6-xgcqc" podUID="532cc4d2-2f64-4521-88b0-26ef20fbd1cc" Jan 23 23:58:48.044577 containerd[2029]: time="2026-01-23T23:58:48.043875459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:58:48.334327 containerd[2029]: time="2026-01-23T23:58:48.334248868Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:48.336677 containerd[2029]: time="2026-01-23T23:58:48.336600832Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:58:48.336813 containerd[2029]: time="2026-01-23T23:58:48.336742264Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:58:48.337115 kubelet[3407]: E0123 23:58:48.337036 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:58:48.337872 kubelet[3407]: E0123 23:58:48.337109 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:58:48.337872 kubelet[3407]: E0123 23:58:48.337296 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-b545h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fdd48bcf6-n88dg_calico-apiserver(17163695-eef5-4bf6-be5b-0d305316c85b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:48.338700 kubelet[3407]: E0123 23:58:48.338608 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-n88dg" podUID="17163695-eef5-4bf6-be5b-0d305316c85b" Jan 23 23:58:48.837740 systemd[1]: cri-containerd-c14abaf93b99505928d4de794f671f60cd734053ea8af6363efd8c9b6c770dd3.scope: Deactivated successfully. Jan 23 23:58:48.838209 systemd[1]: cri-containerd-c14abaf93b99505928d4de794f671f60cd734053ea8af6363efd8c9b6c770dd3.scope: Consumed 6.195s CPU time, 18.1M memory peak, 0B memory swap peak. Jan 23 23:58:48.881290 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c14abaf93b99505928d4de794f671f60cd734053ea8af6363efd8c9b6c770dd3-rootfs.mount: Deactivated successfully. 
Jan 23 23:58:48.896101 containerd[2029]: time="2026-01-23T23:58:48.895866691Z" level=info msg="shim disconnected" id=c14abaf93b99505928d4de794f671f60cd734053ea8af6363efd8c9b6c770dd3 namespace=k8s.io Jan 23 23:58:48.896101 containerd[2029]: time="2026-01-23T23:58:48.896045407Z" level=warning msg="cleaning up after shim disconnected" id=c14abaf93b99505928d4de794f671f60cd734053ea8af6363efd8c9b6c770dd3 namespace=k8s.io Jan 23 23:58:48.896735 containerd[2029]: time="2026-01-23T23:58:48.896065711Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:49.069414 kubelet[3407]: I0123 23:58:49.069344 3407 scope.go:117] "RemoveContainer" containerID="c14abaf93b99505928d4de794f671f60cd734053ea8af6363efd8c9b6c770dd3" Jan 23 23:58:49.072544 containerd[2029]: time="2026-01-23T23:58:49.072361888Z" level=info msg="CreateContainer within sandbox \"5bb90bf1e2dc65d584cc85c480c195649d838b98907ecf17aa3bc6e49f6d7fb4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 23 23:58:49.099751 containerd[2029]: time="2026-01-23T23:58:49.099495652Z" level=info msg="CreateContainer within sandbox \"5bb90bf1e2dc65d584cc85c480c195649d838b98907ecf17aa3bc6e49f6d7fb4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"82225e0b9597cebd4d60c99662c0067385a4be0cac57aa2a7ce771b84b7c7d50\"" Jan 23 23:58:49.101688 containerd[2029]: time="2026-01-23T23:58:49.100627252Z" level=info msg="StartContainer for \"82225e0b9597cebd4d60c99662c0067385a4be0cac57aa2a7ce771b84b7c7d50\"" Jan 23 23:58:49.160819 systemd[1]: Started cri-containerd-82225e0b9597cebd4d60c99662c0067385a4be0cac57aa2a7ce771b84b7c7d50.scope - libcontainer container 82225e0b9597cebd4d60c99662c0067385a4be0cac57aa2a7ce771b84b7c7d50. Jan 23 23:58:49.252029 containerd[2029]: time="2026-01-23T23:58:49.251951753Z" level=info msg="StartContainer for \"82225e0b9597cebd4d60c99662c0067385a4be0cac57aa2a7ce771b84b7c7d50\" returns successfully" Jan 23 23:58:49.778291 systemd[1]: cri-containerd-dfab10117e3961870367fde734f67dfe8b70be02b946882b857c869aa55aa297.scope: Deactivated successfully. Jan 23 23:58:49.779347 systemd[1]: cri-containerd-dfab10117e3961870367fde734f67dfe8b70be02b946882b857c869aa55aa297.scope: Consumed 36.015s CPU time. Jan 23 23:58:49.832901 containerd[2029]: time="2026-01-23T23:58:49.832811432Z" level=info msg="shim disconnected" id=dfab10117e3961870367fde734f67dfe8b70be02b946882b857c869aa55aa297 namespace=k8s.io Jan 23 23:58:49.832901 containerd[2029]: time="2026-01-23T23:58:49.832889984Z" level=warning msg="cleaning up after shim disconnected" id=dfab10117e3961870367fde734f67dfe8b70be02b946882b857c869aa55aa297 namespace=k8s.io Jan 23 23:58:49.833226 containerd[2029]: time="2026-01-23T23:58:49.832912148Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:49.880571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dfab10117e3961870367fde734f67dfe8b70be02b946882b857c869aa55aa297-rootfs.mount: Deactivated successfully. 
Jan 23 23:58:50.078810 kubelet[3407]: I0123 23:58:50.078761 3407 scope.go:117] "RemoveContainer" containerID="dfab10117e3961870367fde734f67dfe8b70be02b946882b857c869aa55aa297" Jan 23 23:58:50.082040 containerd[2029]: time="2026-01-23T23:58:50.081954353Z" level=info msg="CreateContainer within sandbox \"09dede72d0937a584f4ea21204fc80b10c286a79d0e3855bda4f503dc715b5f0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 23 23:58:50.108581 containerd[2029]: time="2026-01-23T23:58:50.108373145Z" level=info msg="CreateContainer within sandbox \"09dede72d0937a584f4ea21204fc80b10c286a79d0e3855bda4f503dc715b5f0\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"ea90821a24ae851f8c95cfabde807df2ab28a2e9b81fb4726ebb01978b0fc965\"" Jan 23 23:58:50.110518 containerd[2029]: time="2026-01-23T23:58:50.109631585Z" level=info msg="StartContainer for \"ea90821a24ae851f8c95cfabde807df2ab28a2e9b81fb4726ebb01978b0fc965\"" Jan 23 23:58:50.121034 kubelet[3407]: E0123 23:58:50.120112 3407 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-253?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 23:58:50.187779 systemd[1]: Started cri-containerd-ea90821a24ae851f8c95cfabde807df2ab28a2e9b81fb4726ebb01978b0fc965.scope - libcontainer container ea90821a24ae851f8c95cfabde807df2ab28a2e9b81fb4726ebb01978b0fc965. Jan 23 23:58:50.238246 containerd[2029]: time="2026-01-23T23:58:50.238161402Z" level=info msg="StartContainer for \"ea90821a24ae851f8c95cfabde807df2ab28a2e9b81fb4726ebb01978b0fc965\" returns successfully" Jan 23 23:58:51.041500 containerd[2029]: time="2026-01-23T23:58:51.040546938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:58:51.320284 containerd[2029]: time="2026-01-23T23:58:51.320178247Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:51.322490 containerd[2029]: time="2026-01-23T23:58:51.322401439Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:58:51.322654 containerd[2029]: time="2026-01-23T23:58:51.322569907Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 23:58:51.324838 kubelet[3407]: E0123 23:58:51.324755 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:58:51.325437 kubelet[3407]: E0123 23:58:51.324834 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:58:51.325437 kubelet[3407]: E0123 23:58:51.325142 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-clm5g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-mmfhn_calico-system(046ae13d-0e4a-437d-9371-4ba65edfa713): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:51.325881 containerd[2029]: time="2026-01-23T23:58:51.325611595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:58:51.327322 kubelet[3407]: E0123 23:58:51.327253 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: 
not found\"" pod="calico-system/goldmane-666569f655-mmfhn" podUID="046ae13d-0e4a-437d-9371-4ba65edfa713" Jan 23 23:58:51.648974 containerd[2029]: time="2026-01-23T23:58:51.648811281Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:51.651250 containerd[2029]: time="2026-01-23T23:58:51.651128913Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:58:51.651250 containerd[2029]: time="2026-01-23T23:58:51.651182589Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:58:51.652014 kubelet[3407]: E0123 23:58:51.651667 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:58:51.652014 kubelet[3407]: E0123 23:58:51.651735 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:58:51.652293 kubelet[3407]: E0123 23:58:51.651907 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzbwp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-5f867bfb44-djs5n_calico-system(471e63ba-4009-4390-becb-d3cf35fc95c6): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:51.653532 kubelet[3407]: E0123 23:58:51.653477 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f867bfb44-djs5n" podUID="471e63ba-4009-4390-becb-d3cf35fc95c6" Jan 23 23:58:54.713964 systemd[1]: cri-containerd-e82b863e34f01ba24045b2a816789424436bc6f718339703c878a30034186c50.scope: Deactivated successfully. Jan 23 23:58:54.714969 systemd[1]: cri-containerd-e82b863e34f01ba24045b2a816789424436bc6f718339703c878a30034186c50.scope: Consumed 5.777s CPU time, 16.1M memory peak, 0B memory swap peak. Jan 23 23:58:54.762836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e82b863e34f01ba24045b2a816789424436bc6f718339703c878a30034186c50-rootfs.mount: Deactivated successfully. 
Jan 23 23:58:54.778005 containerd[2029]: time="2026-01-23T23:58:54.777666516Z" level=info msg="shim disconnected" id=e82b863e34f01ba24045b2a816789424436bc6f718339703c878a30034186c50 namespace=k8s.io Jan 23 23:58:54.778005 containerd[2029]: time="2026-01-23T23:58:54.777740136Z" level=warning msg="cleaning up after shim disconnected" id=e82b863e34f01ba24045b2a816789424436bc6f718339703c878a30034186c50 namespace=k8s.io Jan 23 23:58:54.778005 containerd[2029]: time="2026-01-23T23:58:54.777759720Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:55.096587 kubelet[3407]: I0123 23:58:55.096543 3407 scope.go:117] "RemoveContainer" containerID="e82b863e34f01ba24045b2a816789424436bc6f718339703c878a30034186c50" Jan 23 23:58:55.100118 containerd[2029]: time="2026-01-23T23:58:55.099680878Z" level=info msg="CreateContainer within sandbox \"b1cb250d7e28ecd3f9a6be3286e5c0559cd90ffb339dd0975c22df8e457a049f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 23 23:58:55.127490 containerd[2029]: time="2026-01-23T23:58:55.127398562Z" level=info msg="CreateContainer within sandbox \"b1cb250d7e28ecd3f9a6be3286e5c0559cd90ffb339dd0975c22df8e457a049f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"cd8e97783ad0ed95c9a7df56ab524e222e10dacc726eff4782645e456d544af4\"" Jan 23 23:58:55.128129 containerd[2029]: time="2026-01-23T23:58:55.128065282Z" level=info msg="StartContainer for \"cd8e97783ad0ed95c9a7df56ab524e222e10dacc726eff4782645e456d544af4\"" Jan 23 23:58:55.190802 systemd[1]: Started cri-containerd-cd8e97783ad0ed95c9a7df56ab524e222e10dacc726eff4782645e456d544af4.scope - libcontainer container cd8e97783ad0ed95c9a7df56ab524e222e10dacc726eff4782645e456d544af4. Jan 23 23:58:55.268073 containerd[2029]: time="2026-01-23T23:58:55.267970571Z" level=info msg="StartContainer for \"cd8e97783ad0ed95c9a7df56ab524e222e10dacc726eff4782645e456d544af4\" returns successfully" Jan 23 23:58:58.044925 containerd[2029]: time="2026-01-23T23:58:58.043049976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:58:58.046072 kubelet[3407]: E0123 23:58:58.045995 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wc9cs" podUID="116c2572-ef7b-49fd-a16b-25d6e19f65b8" Jan 23 23:58:58.046668 kubelet[3407]: E0123 23:58:58.046351 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": 
failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7844db9b64-8ffcr" podUID="93ccd330-a859-4470-8f8e-396ff6ffb624" Jan 23 23:58:58.289180 containerd[2029]: time="2026-01-23T23:58:58.289111142Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:58.292470 containerd[2029]: time="2026-01-23T23:58:58.291426122Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:58:58.292470 containerd[2029]: time="2026-01-23T23:58:58.291495194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:58:58.292666 kubelet[3407]: E0123 23:58:58.292081 3407 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:58:58.292666 kubelet[3407]: E0123 23:58:58.292155 3407 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:58:58.292666 kubelet[3407]: E0123 23:58:58.292318 3407 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6q6pw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-7fdd48bcf6-xgcqc_calico-apiserver(532cc4d2-2f64-4521-88b0-26ef20fbd1cc): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:58.293666 kubelet[3407]: E0123 23:58:58.293541 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-xgcqc" podUID="532cc4d2-2f64-4521-88b0-26ef20fbd1cc" Jan 23 23:59:00.039653 kubelet[3407]: E0123 23:59:00.039507 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-n88dg" podUID="17163695-eef5-4bf6-be5b-0d305316c85b" Jan 23 23:59:00.125005 kubelet[3407]: E0123 23:59:00.124683 3407 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ip-172-31-20-253)" Jan 23 23:59:01.636557 systemd[1]: cri-containerd-ea90821a24ae851f8c95cfabde807df2ab28a2e9b81fb4726ebb01978b0fc965.scope: Deactivated successfully. Jan 23 23:59:01.675923 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea90821a24ae851f8c95cfabde807df2ab28a2e9b81fb4726ebb01978b0fc965-rootfs.mount: Deactivated successfully. 
Jan 23 23:59:01.686263 containerd[2029]: time="2026-01-23T23:59:01.686173123Z" level=info msg="shim disconnected" id=ea90821a24ae851f8c95cfabde807df2ab28a2e9b81fb4726ebb01978b0fc965 namespace=k8s.io Jan 23 23:59:01.686263 containerd[2029]: time="2026-01-23T23:59:01.686250007Z" level=warning msg="cleaning up after shim disconnected" id=ea90821a24ae851f8c95cfabde807df2ab28a2e9b81fb4726ebb01978b0fc965 namespace=k8s.io Jan 23 23:59:01.686977 containerd[2029]: time="2026-01-23T23:59:01.686271691Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:59:02.124956 kubelet[3407]: I0123 23:59:02.124894 3407 scope.go:117] "RemoveContainer" containerID="dfab10117e3961870367fde734f67dfe8b70be02b946882b857c869aa55aa297" Jan 23 23:59:02.125615 kubelet[3407]: I0123 23:59:02.125343 3407 scope.go:117] "RemoveContainer" containerID="ea90821a24ae851f8c95cfabde807df2ab28a2e9b81fb4726ebb01978b0fc965" Jan 23 23:59:02.125615 kubelet[3407]: E0123 23:59:02.125580 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-jqjdf_tigera-operator(baf4e5ea-03e1-4719-9f0f-9fd1f8521b40)\"" pod="tigera-operator/tigera-operator-7dcd859c48-jqjdf" podUID="baf4e5ea-03e1-4719-9f0f-9fd1f8521b40" Jan 23 23:59:02.128587 containerd[2029]: time="2026-01-23T23:59:02.128490869Z" level=info msg="RemoveContainer for \"dfab10117e3961870367fde734f67dfe8b70be02b946882b857c869aa55aa297\"" Jan 23 23:59:02.135722 containerd[2029]: time="2026-01-23T23:59:02.135625589Z" level=info msg="RemoveContainer for \"dfab10117e3961870367fde734f67dfe8b70be02b946882b857c869aa55aa297\" returns successfully" Jan 23 23:59:03.039161 kubelet[3407]: E0123 23:59:03.038973 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-5f867bfb44-djs5n" podUID="471e63ba-4009-4390-becb-d3cf35fc95c6" Jan 23 23:59:05.039143 kubelet[3407]: E0123 23:59:05.039062 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-mmfhn" podUID="046ae13d-0e4a-437d-9371-4ba65edfa713" Jan 23 23:59:10.039803 kubelet[3407]: E0123 23:59:10.039654 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-7fdd48bcf6-xgcqc" podUID="532cc4d2-2f64-4521-88b0-26ef20fbd1cc" Jan 23 23:59:10.041551 kubelet[3407]: E0123 23:59:10.041422 3407 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-wc9cs" podUID="116c2572-ef7b-49fd-a16b-25d6e19f65b8" Jan 23 23:59:10.126179 kubelet[3407]: E0123 23:59:10.125085 3407 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.253:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-253?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"