Jan 23 23:55:18.243191 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 23 23:55:18.243245 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 23 22:26:47 -00 2026
Jan 23 23:55:18.243274 kernel: KASLR disabled due to lack of seed
Jan 23 23:55:18.243293 kernel: efi: EFI v2.7 by EDK II
Jan 23 23:55:18.243377 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Jan 23 23:55:18.243397 kernel: ACPI: Early table checksum verification disabled
Jan 23 23:55:18.243417 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 23 23:55:18.243434 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 23 23:55:18.243453 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 23 23:55:18.243470 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 23 23:55:18.243498 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 23 23:55:18.243515 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 23 23:55:18.243531 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 23 23:55:18.243549 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 23 23:55:18.243568 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 23 23:55:18.243592 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 23 23:55:18.243610 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 23 23:55:18.243627 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 23 23:55:18.243645 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 23 23:55:18.243663 kernel: printk: bootconsole [uart0] enabled
Jan 23 23:55:18.243681 kernel: NUMA: Failed to initialise from firmware
Jan 23 23:55:18.243699 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 23:55:18.243717 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 23 23:55:18.243735 kernel: Zone ranges:
Jan 23 23:55:18.243753 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 23 23:55:18.243771 kernel: DMA32 empty
Jan 23 23:55:18.243792 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 23 23:55:18.243810 kernel: Movable zone start for each node
Jan 23 23:55:18.243827 kernel: Early memory node ranges
Jan 23 23:55:18.243844 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 23 23:55:18.243862 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 23 23:55:18.243879 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 23 23:55:18.243896 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 23 23:55:18.243914 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 23 23:55:18.243932 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 23 23:55:18.243948 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 23 23:55:18.243965 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 23 23:55:18.243982 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 23:55:18.244003 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 23 23:55:18.244021 kernel: psci: probing for conduit method from ACPI.
Jan 23 23:55:18.244046 kernel: psci: PSCIv1.0 detected in firmware.
Jan 23 23:55:18.244064 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 23:55:18.244083 kernel: psci: Trusted OS migration not required
Jan 23 23:55:18.244106 kernel: psci: SMC Calling Convention v1.1
Jan 23 23:55:18.244125 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jan 23 23:55:18.244144 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 23 23:55:18.244162 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 23 23:55:18.244182 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 23:55:18.244199 kernel: Detected PIPT I-cache on CPU0
Jan 23 23:55:18.244217 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 23:55:18.244234 kernel: CPU features: detected: Spectre-v2
Jan 23 23:55:18.244251 kernel: CPU features: detected: Spectre-v3a
Jan 23 23:55:18.244268 kernel: CPU features: detected: Spectre-BHB
Jan 23 23:55:18.244286 kernel: CPU features: detected: ARM erratum 1742098
Jan 23 23:55:18.246391 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 23 23:55:18.246422 kernel: alternatives: applying boot alternatives
Jan 23 23:55:18.246444 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:55:18.246494 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 23:55:18.246514 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 23:55:18.246532 kernel: Fallback order for Node 0: 0
Jan 23 23:55:18.246550 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 23 23:55:18.246568 kernel: Policy zone: Normal
Jan 23 23:55:18.246586 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 23:55:18.246604 kernel: software IO TLB: area num 2.
Jan 23 23:55:18.246622 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 23 23:55:18.246673 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Jan 23 23:55:18.246694 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 23:55:18.246713 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 23:55:18.246733 kernel: rcu: RCU event tracing is enabled.
Jan 23 23:55:18.246751 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 23:55:18.246770 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 23:55:18.246789 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 23:55:18.246808 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 23:55:18.246826 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 23:55:18.246843 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 23:55:18.246861 kernel: GICv3: 96 SPIs implemented
Jan 23 23:55:18.246884 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 23:55:18.246902 kernel: Root IRQ handler: gic_handle_irq
Jan 23 23:55:18.246920 kernel: GICv3: GICv3 features: 16 PPIs
Jan 23 23:55:18.246937 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 23 23:55:18.246954 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 23 23:55:18.246972 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 23 23:55:18.246991 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 23 23:55:18.247008 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 23 23:55:18.247027 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 23 23:55:18.247046 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 23 23:55:18.247065 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 23:55:18.247084 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 23 23:55:18.247108 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 23 23:55:18.247127 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 23 23:55:18.247146 kernel: Console: colour dummy device 80x25
Jan 23 23:55:18.247165 kernel: printk: console [tty1] enabled
Jan 23 23:55:18.247183 kernel: ACPI: Core revision 20230628
Jan 23 23:55:18.247203 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 23 23:55:18.247221 kernel: pid_max: default: 32768 minimum: 301
Jan 23 23:55:18.247240 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 23 23:55:18.247259 kernel: landlock: Up and running.
Jan 23 23:55:18.247284 kernel: SELinux: Initializing.
Jan 23 23:55:18.248399 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:55:18.248440 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:55:18.248459 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:55:18.248478 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:55:18.248496 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 23:55:18.248516 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 23:55:18.248534 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 23 23:55:18.248552 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 23 23:55:18.248580 kernel: Remapping and enabling EFI services.
Jan 23 23:55:18.248598 kernel: smp: Bringing up secondary CPUs ...
Jan 23 23:55:18.248616 kernel: Detected PIPT I-cache on CPU1
Jan 23 23:55:18.248634 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 23 23:55:18.248652 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 23 23:55:18.248669 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 23 23:55:18.248687 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 23:55:18.248705 kernel: SMP: Total of 2 processors activated.
Jan 23 23:55:18.248722 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 23:55:18.248744 kernel: CPU features: detected: 32-bit EL1 Support
Jan 23 23:55:18.248763 kernel: CPU features: detected: CRC32 instructions
Jan 23 23:55:18.248781 kernel: CPU: All CPU(s) started at EL1
Jan 23 23:55:18.248810 kernel: alternatives: applying system-wide alternatives
Jan 23 23:55:18.248833 kernel: devtmpfs: initialized
Jan 23 23:55:18.248852 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 23:55:18.248870 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 23:55:18.248889 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 23:55:18.248908 kernel: SMBIOS 3.0.0 present.
Jan 23 23:55:18.248932 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 23 23:55:18.248950 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 23:55:18.248969 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 23:55:18.248988 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 23:55:18.249006 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 23:55:18.249026 kernel: audit: initializing netlink subsys (disabled)
Jan 23 23:55:18.249045 kernel: audit: type=2000 audit(0.286:1): state=initialized audit_enabled=0 res=1
Jan 23 23:55:18.249065 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 23:55:18.249089 kernel: cpuidle: using governor menu
Jan 23 23:55:18.249109 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 23:55:18.249128 kernel: ASID allocator initialised with 65536 entries
Jan 23 23:55:18.249147 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 23:55:18.249165 kernel: Serial: AMBA PL011 UART driver
Jan 23 23:55:18.249184 kernel: Modules: 17488 pages in range for non-PLT usage
Jan 23 23:55:18.249203 kernel: Modules: 509008 pages in range for PLT usage
Jan 23 23:55:18.249221 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 23:55:18.249243 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 23:55:18.249266 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 23:55:18.249285 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 23:55:18.250394 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 23:55:18.250426 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 23:55:18.250445 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 23:55:18.250464 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 23:55:18.250484 kernel: ACPI: Added _OSI(Module Device)
Jan 23 23:55:18.250503 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 23:55:18.250522 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 23:55:18.250564 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 23:55:18.250586 kernel: ACPI: Interpreter enabled
Jan 23 23:55:18.250606 kernel: ACPI: Using GIC for interrupt routing
Jan 23 23:55:18.250626 kernel: ACPI: MCFG table detected, 1 entries
Jan 23 23:55:18.250664 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Jan 23 23:55:18.250978 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 23:55:18.251198 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 23:55:18.253533 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 23:55:18.253776 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Jan 23 23:55:18.253992 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Jan 23 23:55:18.254019 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 23 23:55:18.254039 kernel: acpiphp: Slot [1] registered
Jan 23 23:55:18.254058 kernel: acpiphp: Slot [2] registered
Jan 23 23:55:18.254077 kernel: acpiphp: Slot [3] registered
Jan 23 23:55:18.254095 kernel: acpiphp: Slot [4] registered
Jan 23 23:55:18.254114 kernel: acpiphp: Slot [5] registered
Jan 23 23:55:18.254139 kernel: acpiphp: Slot [6] registered
Jan 23 23:55:18.254158 kernel: acpiphp: Slot [7] registered
Jan 23 23:55:18.254177 kernel: acpiphp: Slot [8] registered
Jan 23 23:55:18.254195 kernel: acpiphp: Slot [9] registered
Jan 23 23:55:18.254213 kernel: acpiphp: Slot [10] registered
Jan 23 23:55:18.254232 kernel: acpiphp: Slot [11] registered
Jan 23 23:55:18.254251 kernel: acpiphp: Slot [12] registered
Jan 23 23:55:18.254269 kernel: acpiphp: Slot [13] registered
Jan 23 23:55:18.254288 kernel: acpiphp: Slot [14] registered
Jan 23 23:55:18.256291 kernel: acpiphp: Slot [15] registered
Jan 23 23:55:18.256368 kernel: acpiphp: Slot [16] registered
Jan 23 23:55:18.256389 kernel: acpiphp: Slot [17] registered
Jan 23 23:55:18.256408 kernel: acpiphp: Slot [18] registered
Jan 23 23:55:18.256426 kernel: acpiphp: Slot [19] registered
Jan 23 23:55:18.256445 kernel: acpiphp: Slot [20] registered
Jan 23 23:55:18.256464 kernel: acpiphp: Slot [21] registered
Jan 23 23:55:18.256482 kernel: acpiphp: Slot [22] registered
Jan 23 23:55:18.256502 kernel: acpiphp: Slot [23] registered
Jan 23 23:55:18.256520 kernel: acpiphp: Slot [24] registered
Jan 23 23:55:18.256545 kernel: acpiphp: Slot [25] registered
Jan 23 23:55:18.256565 kernel: acpiphp: Slot [26] registered
Jan 23 23:55:18.256584 kernel: acpiphp: Slot [27] registered
Jan 23 23:55:18.256604 kernel: acpiphp: Slot [28] registered
Jan 23 23:55:18.256623 kernel: acpiphp: Slot [29] registered
Jan 23 23:55:18.256641 kernel: acpiphp: Slot [30] registered
Jan 23 23:55:18.256660 kernel: acpiphp: Slot [31] registered
Jan 23 23:55:18.256679 kernel: PCI host bridge to bus 0000:00
Jan 23 23:55:18.256960 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 23 23:55:18.257263 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 23 23:55:18.262759 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 23 23:55:18.262981 kernel: pci_bus 0000:00: root bus resource [bus 00]
Jan 23 23:55:18.263222 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 23 23:55:18.264634 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 23 23:55:18.264879 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 23 23:55:18.265117 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 23 23:55:18.265369 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 23 23:55:18.265588 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 23:55:18.265821 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 23 23:55:18.266029 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 23 23:55:18.266233 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 23 23:55:18.267538 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 23 23:55:18.267782 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 23:55:18.267988 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 23 23:55:18.268181 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 23 23:55:18.269399 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 23 23:55:18.269436 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 23 23:55:18.269457 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 23 23:55:18.269476 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 23 23:55:18.269495 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 23 23:55:18.269522 kernel: iommu: Default domain type: Translated
Jan 23 23:55:18.269541 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 23 23:55:18.269560 kernel: efivars: Registered efivars operations
Jan 23 23:55:18.269578 kernel: vgaarb: loaded
Jan 23 23:55:18.269597 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 23 23:55:18.269616 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 23:55:18.269634 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 23:55:18.269653 kernel: pnp: PnP ACPI init
Jan 23 23:55:18.269888 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 23 23:55:18.269921 kernel: pnp: PnP ACPI: found 1 devices
Jan 23 23:55:18.269941 kernel: NET: Registered PF_INET protocol family
Jan 23 23:55:18.269959 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 23:55:18.269979 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 23:55:18.269998 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 23:55:18.270017 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 23:55:18.270037 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 23:55:18.270055 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 23:55:18.270079 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:55:18.270098 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:55:18.270116 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 23:55:18.270135 kernel: PCI: CLS 0 bytes, default 64
Jan 23 23:55:18.270153 kernel: kvm [1]: HYP mode not available
Jan 23 23:55:18.270172 kernel: Initialise system trusted keyrings
Jan 23 23:55:18.270191 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 23:55:18.270209 kernel: Key type asymmetric registered
Jan 23 23:55:18.270228 kernel: Asymmetric key parser 'x509' registered
Jan 23 23:55:18.270251 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 23:55:18.270270 kernel: io scheduler mq-deadline registered
Jan 23 23:55:18.270289 kernel: io scheduler kyber registered
Jan 23 23:55:18.270417 kernel: io scheduler bfq registered
Jan 23 23:55:18.270713 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 23 23:55:18.270747 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 23 23:55:18.270767 kernel: ACPI: button: Power Button [PWRB]
Jan 23 23:55:18.270786 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 23 23:55:18.270805 kernel: ACPI: button: Sleep Button [SLPB]
Jan 23 23:55:18.270834 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 23:55:18.270854 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 23 23:55:18.271085 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 23 23:55:18.271113 kernel: printk: console [ttyS0] disabled
Jan 23 23:55:18.271132 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 23 23:55:18.271152 kernel: printk: console [ttyS0] enabled
Jan 23 23:55:18.271170 kernel: printk: bootconsole [uart0] disabled
Jan 23 23:55:18.271189 kernel: thunder_xcv, ver 1.0
Jan 23 23:55:18.271207 kernel: thunder_bgx, ver 1.0
Jan 23 23:55:18.271232 kernel: nicpf, ver 1.0
Jan 23 23:55:18.271251 kernel: nicvf, ver 1.0
Jan 23 23:55:18.271559 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 23 23:55:18.271756 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:55:17 UTC (1769212517)
Jan 23 23:55:18.271783 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 23:55:18.271802 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 23 23:55:18.271821 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 23 23:55:18.271840 kernel: watchdog: Hard watchdog permanently disabled
Jan 23 23:55:18.271866 kernel: NET: Registered PF_INET6 protocol family
Jan 23 23:55:18.271885 kernel: Segment Routing with IPv6
Jan 23 23:55:18.271904 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 23:55:18.271922 kernel: NET: Registered PF_PACKET protocol family
Jan 23 23:55:18.271940 kernel: Key type dns_resolver registered
Jan 23 23:55:18.271959 kernel: registered taskstats version 1
Jan 23 23:55:18.271977 kernel: Loading compiled-in X.509 certificates
Jan 23 23:55:18.271996 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: e1080b1efd8e2d5332b6814128fba42796535445'
Jan 23 23:55:18.272015 kernel: Key type .fscrypt registered
Jan 23 23:55:18.272038 kernel: Key type fscrypt-provisioning registered
Jan 23 23:55:18.272056 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 23:55:18.272075 kernel: ima: Allocated hash algorithm: sha1
Jan 23 23:55:18.272094 kernel: ima: No architecture policies found
Jan 23 23:55:18.272112 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 23 23:55:18.272132 kernel: clk: Disabling unused clocks
Jan 23 23:55:18.272151 kernel: Freeing unused kernel memory: 39424K
Jan 23 23:55:18.272170 kernel: Run /init as init process
Jan 23 23:55:18.272190 kernel: with arguments:
Jan 23 23:55:18.272214 kernel: /init
Jan 23 23:55:18.272233 kernel: with environment:
Jan 23 23:55:18.272252 kernel: HOME=/
Jan 23 23:55:18.272270 kernel: TERM=linux
Jan 23 23:55:18.272293 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 23 23:55:18.272338 systemd[1]: Detected virtualization amazon.
Jan 23 23:55:18.272360 systemd[1]: Detected architecture arm64.
Jan 23 23:55:18.272380 systemd[1]: Running in initrd.
Jan 23 23:55:18.272407 systemd[1]: No hostname configured, using default hostname.
Jan 23 23:55:18.272426 systemd[1]: Hostname set to .
Jan 23 23:55:18.272447 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 23:55:18.272467 systemd[1]: Queued start job for default target initrd.target.
Jan 23 23:55:18.272488 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:55:18.272508 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:55:18.272529 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 23:55:18.272550 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 23:55:18.272575 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 23:55:18.272596 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 23:55:18.272619 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 23:55:18.272640 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 23:55:18.272661 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:55:18.272681 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:55:18.272706 systemd[1]: Reached target paths.target - Path Units.
Jan 23 23:55:18.272727 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 23:55:18.272747 systemd[1]: Reached target swap.target - Swaps.
Jan 23 23:55:18.272767 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 23:55:18.272787 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 23:55:18.272808 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 23:55:18.272828 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 23:55:18.272848 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 23 23:55:18.272868 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:55:18.272893 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:55:18.272914 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:55:18.272934 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 23:55:18.272954 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 23:55:18.272974 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 23:55:18.272995 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 23:55:18.273015 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 23:55:18.273035 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 23:55:18.273055 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 23:55:18.273080 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:55:18.273100 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 23:55:18.273121 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:55:18.273141 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 23:55:18.273197 systemd-journald[251]: Collecting audit messages is disabled.
Jan 23 23:55:18.273247 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 23:55:18.273268 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 23:55:18.273287 kernel: Bridge firewalling registered
Jan 23 23:55:18.275379 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:55:18.275413 systemd-journald[251]: Journal started
Jan 23 23:55:18.275454 systemd-journald[251]: Runtime Journal (/run/log/journal/ec22dd5c69e54d16765de0d9d9b3c9ba) is 8.0M, max 75.3M, 67.3M free.
Jan 23 23:55:18.224551 systemd-modules-load[252]: Inserted module 'overlay'
Jan 23 23:55:18.267757 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jan 23 23:55:18.290367 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 23:55:18.290441 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 23:55:18.296664 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:55:18.311644 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:55:18.324021 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 23:55:18.324696 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 23:55:18.346268 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 23:55:18.356465 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:55:18.387379 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:55:18.396821 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:55:18.407699 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 23:55:18.415028 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:55:18.425699 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 23:55:18.473100 dracut-cmdline[289]: dracut-dracut-053
Jan 23 23:55:18.480273 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:55:18.500493 systemd-resolved[287]: Positive Trust Anchors:
Jan 23 23:55:18.500527 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 23:55:18.500591 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 23:55:18.642341 kernel: SCSI subsystem initialized
Jan 23 23:55:18.650331 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 23:55:18.664362 kernel: iscsi: registered transport (tcp)
Jan 23 23:55:18.686429 kernel: iscsi: registered transport (qla4xxx)
Jan 23 23:55:18.686507 kernel: QLogic iSCSI HBA Driver
Jan 23 23:55:18.748336 kernel: random: crng init done
Jan 23 23:55:18.748979 systemd-resolved[287]: Defaulting to hostname 'linux'.
Jan 23 23:55:18.752584 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 23:55:18.757856 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:55:18.783731 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 23:55:18.793805 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 23:55:18.831065 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 23:55:18.831143 kernel: device-mapper: uevent: version 1.0.3
Jan 23 23:55:18.833385 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 23 23:55:18.900350 kernel: raid6: neonx8 gen() 6767 MB/s
Jan 23 23:55:18.918337 kernel: raid6: neonx4 gen() 6591 MB/s
Jan 23 23:55:18.936348 kernel: raid6: neonx2 gen() 5465 MB/s
Jan 23 23:55:18.952342 kernel: raid6: neonx1 gen() 3960 MB/s
Jan 23 23:55:18.969348 kernel: raid6: int64x8 gen() 3830 MB/s
Jan 23 23:55:18.987340 kernel: raid6: int64x4 gen() 3707 MB/s
Jan 23 23:55:19.005339 kernel: raid6: int64x2 gen() 3577 MB/s
Jan 23 23:55:19.023430 kernel: raid6: int64x1 gen() 2762 MB/s
Jan 23 23:55:19.023489 kernel: raid6: using algorithm neonx8 gen() 6767 MB/s
Jan 23 23:55:19.042364 kernel: raid6: .... xor() 4864 MB/s, rmw enabled
Jan 23 23:55:19.042406 kernel: raid6: using neon recovery algorithm
Jan 23 23:55:19.050338 kernel: xor: measuring software checksum speed
Jan 23 23:55:19.052792 kernel: 8regs : 10273 MB/sec
Jan 23 23:55:19.052833 kernel: 32regs : 11911 MB/sec
Jan 23 23:55:19.054153 kernel: arm64_neon : 9530 MB/sec
Jan 23 23:55:19.054186 kernel: xor: using function: 32regs (11911 MB/sec)
Jan 23 23:55:19.139356 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 23:55:19.160057 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 23:55:19.171663 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:55:19.218008 systemd-udevd[472]: Using default interface naming scheme 'v255'.
Jan 23 23:55:19.226652 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:55:19.240606 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 23:55:19.278451 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation
Jan 23 23:55:19.337881 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 23:55:19.349797 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 23:55:19.465358 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:55:19.475738 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 23:55:19.522117 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 23:55:19.528334 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 23:55:19.531241 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:55:19.535629 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 23:55:19.554048 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 23:55:19.590380 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 23:55:19.672693 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 23 23:55:19.672757 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 23 23:55:19.676057 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 23:55:19.695906 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 23 23:55:19.696269 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 23 23:55:19.676329 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:55:19.679970 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:55:19.684214 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:55:19.684592 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:55:19.687526 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:55:19.715741 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:55:19.729335 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:35:59:28:76:65
Jan 23 23:55:19.734601 (udev-worker)[536]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 23:55:19.742462 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 23 23:55:19.742541 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 23 23:55:19.756339 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 23 23:55:19.767016 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 23:55:19.767102 kernel: GPT:9289727 != 33554431
Jan 23 23:55:19.767130 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 23:55:19.767156 kernel: GPT:9289727 != 33554431
Jan 23 23:55:19.767181 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 23:55:19.767206 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:55:19.773091 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:55:19.785768 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:55:19.820967 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:55:19.900373 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (519)
Jan 23 23:55:19.930085 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 23 23:55:19.936959 kernel: BTRFS: device fsid 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (520)
Jan 23 23:55:20.014211 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 23 23:55:20.041855 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 23 23:55:20.044680 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 23 23:55:20.062322 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 23 23:55:20.077580 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 23:55:20.091699 disk-uuid[664]: Primary Header is updated.
Jan 23 23:55:20.091699 disk-uuid[664]: Secondary Entries is updated.
Jan 23 23:55:20.091699 disk-uuid[664]: Secondary Header is updated.
Jan 23 23:55:20.106327 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:55:20.117360 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:55:20.129330 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:55:21.129060 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:55:21.129961 disk-uuid[665]: The operation has completed successfully.
Jan 23 23:55:21.310246 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 23:55:21.310474 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 23:55:21.375569 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 23:55:21.386513 sh[1012]: Success
Jan 23 23:55:21.414541 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 23 23:55:21.519203 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 23:55:21.532543 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 23:55:21.543405 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 23:55:21.578674 kernel: BTRFS info (device dm-0): first mount of filesystem 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe
Jan 23 23:55:21.578736 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:55:21.578763 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 23 23:55:21.580614 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 23:55:21.582012 kernel: BTRFS info (device dm-0): using free space tree
Jan 23 23:55:21.685342 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 23:55:21.699945 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 23:55:21.704602 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 23:55:21.718558 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 23:55:21.728643 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 23:55:21.753677 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:55:21.753756 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:55:21.753783 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:55:21.777356 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:55:21.792073 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 23 23:55:21.797356 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:55:21.809376 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 23:55:21.824685 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 23:55:21.904257 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 23:55:21.925862 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 23:55:21.970939 systemd-networkd[1204]: lo: Link UP
Jan 23 23:55:21.970961 systemd-networkd[1204]: lo: Gained carrier
Jan 23 23:55:21.973483 systemd-networkd[1204]: Enumeration completed
Jan 23 23:55:21.974373 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 23:55:21.974476 systemd-networkd[1204]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:55:21.974483 systemd-networkd[1204]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 23:55:21.991716 systemd[1]: Reached target network.target - Network.
Jan 23 23:55:21.999557 systemd-networkd[1204]: eth0: Link UP
Jan 23 23:55:21.999570 systemd-networkd[1204]: eth0: Gained carrier
Jan 23 23:55:21.999588 systemd-networkd[1204]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:55:22.023405 systemd-networkd[1204]: eth0: DHCPv4 address 172.31.28.204/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 23 23:55:22.316826 ignition[1137]: Ignition 2.19.0
Jan 23 23:55:22.316846 ignition[1137]: Stage: fetch-offline
Jan 23 23:55:22.319021 ignition[1137]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:55:22.319046 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:55:22.319988 ignition[1137]: Ignition finished successfully
Jan 23 23:55:22.330490 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 23:55:22.343715 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 23:55:22.378656 ignition[1221]: Ignition 2.19.0
Jan 23 23:55:22.378685 ignition[1221]: Stage: fetch
Jan 23 23:55:22.379418 ignition[1221]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:55:22.379443 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:55:22.379609 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:55:22.392967 ignition[1221]: PUT result: OK
Jan 23 23:55:22.395874 ignition[1221]: parsed url from cmdline: ""
Jan 23 23:55:22.395890 ignition[1221]: no config URL provided
Jan 23 23:55:22.395905 ignition[1221]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 23:55:22.395932 ignition[1221]: no config at "/usr/lib/ignition/user.ign"
Jan 23 23:55:22.395963 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:55:22.402659 ignition[1221]: PUT result: OK
Jan 23 23:55:22.403391 ignition[1221]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 23 23:55:22.409816 ignition[1221]: GET result: OK
Jan 23 23:55:22.410005 ignition[1221]: parsing config with SHA512: fec0909f84a2045d40ad6065b086fd22ad41eae4357f335f78aeb24ddceea1a39670a0075c9fc3d5e1a9b30ebe2b8d30a8b13dc48852b416fa25f3c265031be8
Jan 23 23:55:22.418526 unknown[1221]: fetched base config from "system"
Jan 23 23:55:22.422655 unknown[1221]: fetched base config from "system"
Jan 23 23:55:22.423432 ignition[1221]: fetch: fetch complete
Jan 23 23:55:22.422682 unknown[1221]: fetched user config from "aws"
Jan 23 23:55:22.423445 ignition[1221]: fetch: fetch passed
Jan 23 23:55:22.423551 ignition[1221]: Ignition finished successfully
Jan 23 23:55:22.435104 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 23:55:22.447674 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 23:55:22.479774 ignition[1228]: Ignition 2.19.0
Jan 23 23:55:22.480267 ignition[1228]: Stage: kargs
Jan 23 23:55:22.481391 ignition[1228]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:55:22.482075 ignition[1228]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:55:22.482237 ignition[1228]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:55:22.490812 ignition[1228]: PUT result: OK
Jan 23 23:55:22.497821 ignition[1228]: kargs: kargs passed
Jan 23 23:55:22.498127 ignition[1228]: Ignition finished successfully
Jan 23 23:55:22.504407 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 23:55:22.514622 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 23:55:22.541891 ignition[1235]: Ignition 2.19.0
Jan 23 23:55:22.541918 ignition[1235]: Stage: disks
Jan 23 23:55:22.542599 ignition[1235]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:55:22.542625 ignition[1235]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:55:22.542801 ignition[1235]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:55:22.544914 ignition[1235]: PUT result: OK
Jan 23 23:55:22.557794 ignition[1235]: disks: disks passed
Jan 23 23:55:22.557974 ignition[1235]: Ignition finished successfully
Jan 23 23:55:22.561875 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 23:55:22.566952 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 23:55:22.570513 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 23:55:22.578485 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 23:55:22.580765 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 23:55:22.583453 systemd[1]: Reached target basic.target - Basic System.
Jan 23 23:55:22.598756 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 23:55:22.649587 systemd-fsck[1243]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 23 23:55:22.655256 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 23:55:22.669698 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 23:55:22.750355 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 4f5f6971-6639-4171-835a-63d34aadb0e5 r/w with ordered data mode. Quota mode: none.
Jan 23 23:55:22.751026 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 23:55:22.755237 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 23:55:22.774516 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:55:22.780537 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 23:55:22.788602 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 23:55:22.788706 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 23:55:22.799416 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 23:55:22.813336 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1262)
Jan 23 23:55:22.817952 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:55:22.818021 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:55:22.819444 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:55:22.824607 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 23:55:22.833598 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 23:55:22.845359 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:55:22.849750 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:55:23.145633 initrd-setup-root[1286]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 23:55:23.167231 initrd-setup-root[1293]: cut: /sysroot/etc/group: No such file or directory
Jan 23 23:55:23.176806 initrd-setup-root[1300]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 23:55:23.186991 initrd-setup-root[1307]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 23:55:23.515675 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 23:55:23.525621 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 23:55:23.539615 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 23:55:23.556975 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 23:55:23.560987 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:55:23.584874 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 23:55:23.616257 ignition[1375]: INFO : Ignition 2.19.0
Jan 23 23:55:23.618434 ignition[1375]: INFO : Stage: mount
Jan 23 23:55:23.618434 ignition[1375]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:55:23.618434 ignition[1375]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:55:23.618434 ignition[1375]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:55:23.628636 ignition[1375]: INFO : PUT result: OK
Jan 23 23:55:23.633740 ignition[1375]: INFO : mount: mount passed
Jan 23 23:55:23.635584 ignition[1375]: INFO : Ignition finished successfully
Jan 23 23:55:23.636177 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 23:55:23.649502 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 23:55:23.760763 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:55:23.791350 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1387)
Jan 23 23:55:23.796065 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:55:23.796133 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:55:23.796160 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:55:23.804357 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:55:23.806146 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:55:23.841725 ignition[1404]: INFO : Ignition 2.19.0
Jan 23 23:55:23.841725 ignition[1404]: INFO : Stage: files
Jan 23 23:55:23.845657 ignition[1404]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:55:23.845657 ignition[1404]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:55:23.845657 ignition[1404]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:55:23.854759 ignition[1404]: INFO : PUT result: OK
Jan 23 23:55:23.858856 ignition[1404]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 23:55:23.861596 ignition[1404]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 23:55:23.861596 ignition[1404]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 23:55:23.920537 ignition[1404]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 23:55:23.923850 ignition[1404]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 23:55:23.927441 unknown[1404]: wrote ssh authorized keys file for user: core
Jan 23 23:55:23.929958 ignition[1404]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 23:55:23.935572 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 23:55:23.939917 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jan 23 23:55:23.979482 systemd-networkd[1204]: eth0: Gained IPv6LL
Jan 23 23:55:24.039148 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 23:55:24.444896 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 23:55:24.449339 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 23:55:24.453642 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 23:55:24.457924 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:55:24.461943 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:55:24.461943 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:55:24.470140 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:55:24.474145 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:55:24.478247 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:55:24.482369 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:55:24.486519 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:55:24.490516 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 23:55:24.496477 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 23:55:24.502224 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 23:55:24.502224 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Jan 23 23:55:24.844699 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 23 23:55:25.220795 ignition[1404]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Jan 23 23:55:25.220795 ignition[1404]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 23 23:55:25.229167 ignition[1404]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:55:25.229167 ignition[1404]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:55:25.229167 ignition[1404]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 23 23:55:25.229167 ignition[1404]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 23:55:25.229167 ignition[1404]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 23:55:25.229167 ignition[1404]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:55:25.229167 ignition[1404]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:55:25.229167 ignition[1404]: INFO : files: files passed
Jan 23 23:55:25.229167 ignition[1404]: INFO : Ignition finished successfully
Jan 23 23:55:25.259982 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 23:55:25.271593 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 23:55:25.287656 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 23:55:25.300228 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 23:55:25.301489 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 23:55:25.317231 initrd-setup-root-after-ignition[1432]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:55:25.317231 initrd-setup-root-after-ignition[1432]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:55:25.325845 initrd-setup-root-after-ignition[1436]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:55:25.331288 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 23:55:25.338192 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 23:55:25.348694 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 23:55:25.398884 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 23:55:25.399134 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 23:55:25.405243 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 23:55:25.412092 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 23:55:25.414446 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 23:55:25.427677 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 23:55:25.455956 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:55:25.470370 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 23:55:25.494894 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:55:25.500427 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:55:25.509132 systemd[1]: Stopped target timers.target - Timer Units. Jan 23 23:55:25.513091 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 23 23:55:25.513389 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:55:25.521108 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 23 23:55:25.521441 systemd[1]: Stopped target basic.target - Basic System. Jan 23 23:55:25.527932 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 23 23:55:25.530868 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 23 23:55:25.536087 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 23 23:55:25.545556 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 23 23:55:25.548170 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 23 23:55:25.555944 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 23 23:55:25.559060 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 23:55:25.565214 systemd[1]: Stopped target swap.target - Swaps. Jan 23 23:55:25.568068 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 23:55:25.568328 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:55:25.576441 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:55:25.579332 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:55:25.587098 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 23:55:25.587443 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:55:25.592543 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 23:55:25.592952 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 23:55:25.603074 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 23:55:25.603835 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:55:25.611792 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 23:55:25.612013 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 23:55:25.622774 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 23:55:25.629539 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Jan 23 23:55:25.632861 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 23:55:25.633430 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:55:25.644878 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 23:55:25.645128 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:55:25.670741 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 23:55:25.672365 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 23:55:25.685836 ignition[1456]: INFO : Ignition 2.19.0 Jan 23 23:55:25.687948 ignition[1456]: INFO : Stage: umount Jan 23 23:55:25.689646 ignition[1456]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:55:25.689646 ignition[1456]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:55:25.689646 ignition[1456]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:55:25.698472 ignition[1456]: INFO : PUT result: OK Jan 23 23:55:25.704668 ignition[1456]: INFO : umount: umount passed Jan 23 23:55:25.709803 ignition[1456]: INFO : Ignition finished successfully Jan 23 23:55:25.705685 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 23:55:25.709515 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 23:55:25.711934 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 23:55:25.718553 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 23:55:25.718794 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 23:55:25.733655 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 23:55:25.733839 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 23:55:25.740354 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 23:55:25.740476 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 23:55:25.744928 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 23:55:25.745030 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 23:55:25.749386 systemd[1]: Stopped target network.target - Network. Jan 23 23:55:25.753375 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 23:55:25.753610 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:55:25.764905 systemd[1]: Stopped target paths.target - Path Units. Jan 23 23:55:25.766956 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 23:55:25.770431 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:55:25.773265 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 23:55:25.778771 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 23:55:25.783271 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 23:55:25.783374 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:55:25.788848 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 23:55:25.788921 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:55:25.791340 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 23:55:25.791428 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 23:55:25.793729 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 23:55:25.793810 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
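
Each Ignition stage on EC2, including the umount stage above, first authenticates to the instance metadata service; that is the "PUT http://169.254.169.254/latest/api/token" / "PUT result: OK" pair in the log. This is the standard IMDSv2 exchange and can be reproduced by hand from the running instance:

    # IMDSv2: obtain a session token, then present it on every metadata GET.
    TOKEN=$(curl -fsS -X PUT http://169.254.169.254/latest/api/token \
      -H 'X-aws-ec2-metadata-token-ttl-seconds: 21600')
    curl -fsS -H "X-aws-ec2-metadata-token: $TOKEN" \
      http://169.254.169.254/2021-01-03/meta-data/instance-id
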
Jan 23 23:55:25.798342 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 23:55:25.798452 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 23:55:25.804677 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 23:55:25.813265 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 23:55:25.823382 systemd-networkd[1204]: eth0: DHCPv6 lease lost Jan 23 23:55:25.828140 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 23:55:25.828409 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 23:55:25.833849 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 23:55:25.836973 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 23:55:25.853001 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 23:55:25.853089 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:55:25.869979 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 23:55:25.872874 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 23:55:25.873000 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:55:25.876395 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:55:25.876505 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:55:25.878829 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 23:55:25.878922 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 23:55:25.879152 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 23:55:25.879224 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:55:25.880031 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:55:25.930113 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 23:55:25.932741 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:55:25.945230 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 23:55:25.945621 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 23:55:25.952656 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 23:55:25.952734 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:55:25.955555 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 23:55:25.955644 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:55:25.963114 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 23:55:25.963207 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 23:55:25.965897 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:55:25.965984 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:55:25.986114 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 23:55:25.993318 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 23:55:25.993602 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:55:26.002826 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Jan 23 23:55:26.003072 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:55:26.014165 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 23:55:26.014274 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:55:26.019764 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:55:26.019868 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:55:26.027768 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 23:55:26.028143 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 23:55:26.032093 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 23:55:26.034007 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 23:55:26.040828 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 23:55:26.060552 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 23:55:26.077470 systemd[1]: Switching root. Jan 23 23:55:26.141339 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). Jan 23 23:55:26.141420 systemd-journald[251]: Journal stopped Jan 23 23:55:28.722889 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 23:55:28.723027 kernel: SELinux: policy capability open_perms=1 Jan 23 23:55:28.723061 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 23:55:28.723099 kernel: SELinux: policy capability always_check_network=0 Jan 23 23:55:28.723130 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 23:55:28.723170 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 23:55:28.723202 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 23:55:28.723231 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 23:55:28.723259 kernel: audit: type=1403 audit(1769212526.783:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 23:55:28.723323 systemd[1]: Successfully loaded SELinux policy in 62.006ms. Jan 23 23:55:28.723379 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.312ms. Jan 23 23:55:28.723416 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 23 23:55:28.723453 systemd[1]: Detected virtualization amazon. Jan 23 23:55:28.723486 systemd[1]: Detected architecture arm64. Jan 23 23:55:28.723515 systemd[1]: Detected first boot. Jan 23 23:55:28.723548 systemd[1]: Initializing machine ID from VM UUID. Jan 23 23:55:28.723591 zram_generator::config[1499]: No configuration found. Jan 23 23:55:28.723638 systemd[1]: Populated /etc with preset unit settings. Jan 23 23:55:28.723671 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 23:55:28.723705 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 23:55:28.723741 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 23:55:28.723774 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 23:55:28.723808 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
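
"Initializing machine ID from VM UUID" means this first boot seeded /etc/machine-id from the hypervisor-provided SMBIOS product UUID rather than generating a random ID. A sketch for checking the correspondence after boot (product_uuid is root-only; the machine ID should be that UUID reformatted as 32 lower-case hex digits):

    cat /etc/machine-id                       # committed machine ID
    sudo cat /sys/class/dmi/id/product_uuid   # VM UUID it was derived from
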
Jan 23 23:55:28.723838 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 23:55:28.723871 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 23:55:28.723905 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 23:55:28.723940 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 23:55:28.723974 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 23:55:28.724010 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 23:55:28.724041 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:55:28.724073 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:55:28.724104 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 23:55:28.724140 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 23:55:28.724176 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 23 23:55:28.724222 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 23:55:28.724261 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 23:55:28.724296 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:55:28.726453 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 23:55:28.726490 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 23:55:28.726523 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 23:55:28.726557 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 23:55:28.726592 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:55:28.726646 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 23:55:28.726680 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:55:28.726714 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:55:28.726750 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 23:55:28.726784 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 23:55:28.726818 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:55:28.726849 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:55:28.726878 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:55:28.726908 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 23:55:28.726938 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 23:55:28.726970 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 23:55:28.727000 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 23:55:28.727034 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 23:55:28.727068 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 23:55:28.727099 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
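
The \x2d sequences in unit names such as system-serial\x2dgetty.slice and dev-disk-by\x2dlabel-OEM.device are systemd unit-name escaping: "/" maps to "-", so a literal dash inside a path component has to be encoded. systemd-escape reproduces the names seen above:

    systemd-escape --path /dev/disk/by-label/OEM       # -> dev-disk-by\x2dlabel-OEM
    systemd-escape --path /dev/ttyS0 --suffix=device   # -> dev-ttyS0.device
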
Jan 23 23:55:28.727135 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 23:55:28.727165 systemd[1]: Reached target machines.target - Containers. Jan 23 23:55:28.727195 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 23:55:28.727226 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:55:28.727255 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:55:28.727285 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 23:55:28.728487 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:55:28.728549 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:55:28.728580 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:55:28.728613 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 23:55:28.728643 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:55:28.728676 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 23:55:28.728706 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 23:55:28.728736 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 23:55:28.728773 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 23:55:28.728803 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 23:55:28.728835 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 23:55:28.728865 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 23:55:28.728893 kernel: loop: module loaded Jan 23 23:55:28.728923 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 23:55:28.728954 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 23:55:28.728986 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 23:55:28.729020 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 23:55:28.729056 systemd[1]: Stopped verity-setup.service. Jan 23 23:55:28.729090 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 23:55:28.729120 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 23:55:28.729150 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 23:55:28.729181 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 23:55:28.729211 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 23:55:28.729245 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 23:55:28.729275 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:55:28.730350 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 23:55:28.730402 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 23:55:28.730432 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:55:28.730462 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
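
Each modprobe@<module>.service above is an instance of a single template unit; the instance name after "@" is expanded into the modprobe invocation. It can be inspected on the host:

    systemctl cat modprobe@dm_mod.service
    # The upstream template is essentially (paraphrased):
    #   [Service]
    #   Type=oneshot
    #   ExecStart=-/sbin/modprobe -abq %I
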
Jan 23 23:55:28.730496 kernel: fuse: init (API version 7.39) Jan 23 23:55:28.730528 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:55:28.730567 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:55:28.730600 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:55:28.730654 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:55:28.730691 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 23:55:28.730725 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 23:55:28.730817 systemd-journald[1584]: Collecting audit messages is disabled. Jan 23 23:55:28.730886 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 23:55:28.730929 systemd-journald[1584]: Journal started Jan 23 23:55:28.731025 systemd-journald[1584]: Runtime Journal (/run/log/journal/ec22dd5c69e54d16765de0d9d9b3c9ba) is 8.0M, max 75.3M, 67.3M free. Jan 23 23:55:28.733447 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 23:55:28.053697 systemd[1]: Queued start job for default target multi-user.target. Jan 23 23:55:28.113147 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 23 23:55:28.113956 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 23:55:28.741366 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:55:28.779009 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 23:55:28.779119 kernel: ACPI: bus type drm_connector registered Jan 23 23:55:28.785455 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 23:55:28.793353 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:55:28.797508 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 23:55:28.801436 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 23:55:28.805026 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:55:28.805315 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:55:28.808152 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 23:55:28.811998 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 23:55:28.831455 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 23:55:28.856812 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 23:55:28.859565 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 23:55:28.859613 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 23:55:28.868790 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 23 23:55:28.881625 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 23:55:28.894662 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 23:55:28.898723 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 23 23:55:28.904419 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 23:55:28.910662 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 23:55:28.916578 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:55:28.920647 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 23:55:28.927662 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:55:28.940931 systemd-tmpfiles[1602]: ACLs are not supported, ignoring. Jan 23 23:55:28.940973 systemd-tmpfiles[1602]: ACLs are not supported, ignoring. Jan 23 23:55:28.944873 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 23:55:28.952040 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:55:28.957408 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 23:55:28.968342 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 23 23:55:28.988290 systemd-journald[1584]: Time spent on flushing to /var/log/journal/ec22dd5c69e54d16765de0d9d9b3c9ba is 61.575ms for 905 entries. Jan 23 23:55:28.988290 systemd-journald[1584]: System Journal (/var/log/journal/ec22dd5c69e54d16765de0d9d9b3c9ba) is 8.0M, max 195.6M, 187.6M free. Jan 23 23:55:29.085077 systemd-journald[1584]: Received client request to flush runtime journal. Jan 23 23:55:29.085174 kernel: loop0: detected capacity change from 0 to 114328 Jan 23 23:55:28.996444 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:55:29.015838 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 23:55:29.033783 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 23:55:29.038991 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 23 23:55:29.054647 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 23 23:55:29.096366 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 23:55:29.115938 udevadm[1635]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 23 23:55:29.129819 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 23:55:29.131041 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:55:29.134523 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 23 23:55:29.173704 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 23:55:29.199376 kernel: loop1: detected capacity change from 0 to 211168 Jan 23 23:55:29.205414 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 23:55:29.220226 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 23:55:29.266785 systemd-tmpfiles[1650]: ACLs are not supported, ignoring. Jan 23 23:55:29.266826 systemd-tmpfiles[1650]: ACLs are not supported, ignoring. Jan 23 23:55:29.276832 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
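
systemd-journald starts on the volatile runtime journal in /run/log/journal and, once the root filesystem is writable, systemd-journal-flush.service asks it to migrate to the persistent /var/log/journal; that is the "Received client request to flush runtime journal" entry above, and the two size reports (8.0M/75.3M runtime vs 8.0M/195.6M system) are the corresponding caps. The same request can be issued manually:

    journalctl --flush        # migrate /run/log/journal to /var/log/journal
    journalctl --disk-usage   # size of the persistent journal afterwards
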
Jan 23 23:55:29.317346 kernel: loop2: detected capacity change from 0 to 52536 Jan 23 23:55:29.375923 kernel: loop3: detected capacity change from 0 to 114432 Jan 23 23:55:29.512460 kernel: loop4: detected capacity change from 0 to 114328 Jan 23 23:55:29.534956 kernel: loop5: detected capacity change from 0 to 211168 Jan 23 23:55:29.568342 kernel: loop6: detected capacity change from 0 to 52536 Jan 23 23:55:29.591438 kernel: loop7: detected capacity change from 0 to 114432 Jan 23 23:55:29.601419 (sd-merge)[1656]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 23 23:55:29.602412 (sd-merge)[1656]: Merged extensions into '/usr'. Jan 23 23:55:29.616886 systemd[1]: Reloading requested from client PID 1632 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 23:55:29.617451 systemd[1]: Reloading... Jan 23 23:55:29.796344 zram_generator::config[1679]: No configuration found. Jan 23 23:55:30.159741 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:55:30.271552 systemd[1]: Reloading finished in 652 ms. Jan 23 23:55:30.311263 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 23:55:30.314771 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 23:55:30.333742 systemd[1]: Starting ensure-sysext.service... Jan 23 23:55:30.344777 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 23:55:30.351675 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:55:30.393495 systemd[1]: Reloading requested from client PID 1734 ('systemctl') (unit ensure-sysext.service)... Jan 23 23:55:30.393528 systemd[1]: Reloading... Jan 23 23:55:30.424109 systemd-tmpfiles[1735]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 23:55:30.424785 systemd-tmpfiles[1735]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 23:55:30.441186 systemd-tmpfiles[1735]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 23:55:30.444755 systemd-udevd[1736]: Using default interface naming scheme 'v255'. Jan 23 23:55:30.447282 systemd-tmpfiles[1735]: ACLs are not supported, ignoring. Jan 23 23:55:30.448584 systemd-tmpfiles[1735]: ACLs are not supported, ignoring. Jan 23 23:55:30.470870 systemd-tmpfiles[1735]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:55:30.470896 systemd-tmpfiles[1735]: Skipping /boot Jan 23 23:55:30.514409 systemd-tmpfiles[1735]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:55:30.514430 systemd-tmpfiles[1735]: Skipping /boot Jan 23 23:55:30.584038 ldconfig[1627]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 23:55:30.626350 zram_generator::config[1772]: No configuration found. Jan 23 23:55:30.805504 (udev-worker)[1778]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:55:31.000105 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
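
The loop0..loop7 capacity changes and the (sd-merge) lines are systemd-sysext attaching the four system-extension images staged earlier (containerd-flatcar, docker-flatcar, kubernetes, oem-ami) as loop devices and overmounting their combined trees onto /usr; the daemon reload that follows picks up the unit files they ship. The merge state can be inspected or redone at runtime:

    systemd-sysext status    # which extension images are merged over /usr and /opt
    systemd-sysext refresh   # re-merge after changing /etc/extensions
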
Jan 23 23:55:31.101337 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1770) Jan 23 23:55:31.148761 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 23:55:31.149380 systemd[1]: Reloading finished in 755 ms. Jan 23 23:55:31.182188 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:55:31.189959 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 23:55:31.201350 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:55:31.276652 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:55:31.296519 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 23:55:31.327748 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 23:55:31.340273 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 23:55:31.355647 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 23:55:31.382830 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 23:55:31.403809 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:55:31.417936 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 23:55:31.475839 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 23 23:55:31.493609 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 23:55:31.507359 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 23:55:31.529457 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:55:31.538946 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 23 23:55:31.552896 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:55:31.560589 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:55:31.570750 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:55:31.573230 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:55:31.577855 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 23:55:31.592931 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 23:55:31.598447 augenrules[1962]: No rules Jan 23 23:55:31.605834 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 23:55:31.608118 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 23:55:31.614471 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:55:31.618164 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 23:55:31.623875 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:55:31.624182 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
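
The systemd-tmpfiles "Duplicate line for path" notices mean the same path is declared by more than one tmpfiles.d fragment; the highest-priority fragment wins and the later lines are skipped, which is why these are warnings rather than errors. One way to locate the competing declarations:

    # Print the effective tmpfiles.d configuration (with per-file headers)
    # and look for duplicated paths such as /var/log/journal.
    systemd-tmpfiles --cat-config | grep -B1 '/var/log/journal'
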
Jan 23 23:55:31.642911 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:55:31.647845 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:55:31.655333 lvm[1955]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:55:31.655833 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:55:31.658269 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:55:31.658705 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 23:55:31.661109 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 23:55:31.671208 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:55:31.671555 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:55:31.677823 systemd[1]: Finished ensure-sysext.service. Jan 23 23:55:31.702926 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:55:31.703317 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:55:31.706242 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:55:31.710713 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:55:31.711101 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:55:31.730409 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:55:31.731635 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:55:31.735989 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:55:31.747223 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 23 23:55:31.751030 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 23:55:31.752566 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 23 23:55:31.760087 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:55:31.771713 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 23 23:55:31.799349 lvm[1985]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:55:31.819968 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:55:31.823107 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 23:55:31.844749 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 23 23:55:31.937266 systemd-networkd[1938]: lo: Link UP Jan 23 23:55:31.937865 systemd-networkd[1938]: lo: Gained carrier Jan 23 23:55:31.941136 systemd-networkd[1938]: Enumeration completed Jan 23 23:55:31.941616 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:55:31.944020 systemd-networkd[1938]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 23 23:55:31.944035 systemd-networkd[1938]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:55:31.946425 systemd-networkd[1938]: eth0: Link UP Jan 23 23:55:31.946951 systemd-networkd[1938]: eth0: Gained carrier Jan 23 23:55:31.947094 systemd-networkd[1938]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:55:31.954795 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 23:55:31.958578 systemd-resolved[1942]: Positive Trust Anchors: Jan 23 23:55:31.958632 systemd-resolved[1942]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 23:55:31.958696 systemd-resolved[1942]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 23:55:31.962069 systemd-networkd[1938]: eth0: DHCPv4 address 172.31.28.204/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 23:55:31.985292 systemd-resolved[1942]: Defaulting to hostname 'linux'. Jan 23 23:55:31.988678 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 23:55:31.991414 systemd[1]: Reached target network.target - Network. Jan 23 23:55:31.993490 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:55:31.996242 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 23:55:31.998848 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 23:55:32.001760 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 23:55:32.004948 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 23:55:32.007659 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 23:55:32.010431 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 23:55:32.013200 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 23:55:32.013252 systemd[1]: Reached target paths.target - Path Units. Jan 23 23:55:32.015562 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:55:32.019357 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 23:55:32.024525 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 23:55:32.033108 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 23:55:32.036629 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 23:55:32.039424 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:55:32.041740 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:55:32.044001 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
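
eth0 matched Flatcar's catch-all /usr/lib/systemd/network/zz-default.network (the "potentially unpredictable interface name" note is networkd warning that the match is by interface name, which may not be stable) and acquired 172.31.28.204/20 from 172.31.16.1 over DHCPv4. The unit is roughly a match-everything DHCP stanza; a sketch, to be confirmed on the host:

    cat /usr/lib/systemd/network/zz-default.network
    #   [Match]
    #   Name=*
    #   [Network]
    #   DHCP=yes
    networkctl status eth0    # shows the acquired lease and carrier state
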
Jan 23 23:55:32.044168 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:55:32.052522 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 23:55:32.060134 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 23:55:32.066720 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 23:55:32.080512 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 23:55:32.086639 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 23:55:32.089859 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 23:55:32.098659 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 23:55:32.107736 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 23:55:32.115455 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 23:55:32.123059 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 23 23:55:32.129966 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 23:55:32.139696 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 23:55:32.150629 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 23:55:32.158229 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 23:55:32.160778 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 23:55:32.165718 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 23:55:32.171050 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 23:55:32.195421 jq[2002]: false Jan 23 23:55:32.196888 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 23:55:32.197263 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 23:55:32.283090 dbus-daemon[2001]: [system] SELinux support is enabled Jan 23 23:55:32.289733 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 23:55:32.300799 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 23:55:32.300923 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 23:55:32.304245 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 23:55:32.304291 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
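
prepare-helm.service, written and preset-enabled during the Ignition files stage, now starts alongside the other provisioning units. The log records only the unit name and the tarball path, so the following one-shot body is hypothetical: extract the arm64 helm binary staged at /opt into /opt/bin.

    # Hypothetical implementation of prepare-helm.service's ExecStart.
    mkdir -p /opt/bin
    tar -x -f /opt/helm-v3.17.3-linux-arm64.tar.gz \
      --strip-components=1 -C /opt/bin linux-arm64/helm
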
Jan 23 23:55:32.311365 jq[2012]: true Jan 23 23:55:32.316990 (ntainerd)[2024]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 23:55:32.325652 dbus-daemon[2001]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1938 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 23:55:32.330189 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 23:55:32.333363 extend-filesystems[2003]: Found loop4 Jan 23 23:55:32.333363 extend-filesystems[2003]: Found loop5 Jan 23 23:55:32.333363 extend-filesystems[2003]: Found loop6 Jan 23 23:55:32.333363 extend-filesystems[2003]: Found loop7 Jan 23 23:55:32.333363 extend-filesystems[2003]: Found nvme0n1 Jan 23 23:55:32.333363 extend-filesystems[2003]: Found nvme0n1p1 Jan 23 23:55:32.333363 extend-filesystems[2003]: Found nvme0n1p2 Jan 23 23:55:32.333363 extend-filesystems[2003]: Found nvme0n1p3 Jan 23 23:55:32.333363 extend-filesystems[2003]: Found usr Jan 23 23:55:32.333363 extend-filesystems[2003]: Found nvme0n1p4 Jan 23 23:55:32.333363 extend-filesystems[2003]: Found nvme0n1p6 Jan 23 23:55:32.333363 extend-filesystems[2003]: Found nvme0n1p7 Jan 23 23:55:32.377514 extend-filesystems[2003]: Found nvme0n1p9 Jan 23 23:55:32.377514 extend-filesystems[2003]: Checking size of /dev/nvme0n1p9 Jan 23 23:55:32.390125 ntpd[2005]: 23 Jan 23:55:32 ntpd[2005]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 21:53:23 UTC 2026 (1): Starting Jan 23 23:55:32.390125 ntpd[2005]: 23 Jan 23:55:32 ntpd[2005]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 23:55:32.390125 ntpd[2005]: 23 Jan 23:55:32 ntpd[2005]: ---------------------------------------------------- Jan 23 23:55:32.390125 ntpd[2005]: 23 Jan 23:55:32 ntpd[2005]: ntp-4 is maintained by Network Time Foundation, Jan 23 23:55:32.390125 ntpd[2005]: 23 Jan 23:55:32 ntpd[2005]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 23:55:32.390125 ntpd[2005]: 23 Jan 23:55:32 ntpd[2005]: corporation. 
Support and training for ntp-4 are Jan 23 23:55:32.390125 ntpd[2005]: 23 Jan 23:55:32 ntpd[2005]: available at https://www.nwtime.org/support Jan 23 23:55:32.390125 ntpd[2005]: 23 Jan 23:55:32 ntpd[2005]: ---------------------------------------------------- Jan 23 23:55:32.390125 ntpd[2005]: 23 Jan 23:55:32 ntpd[2005]: proto: precision = 0.096 usec (-23) Jan 23 23:55:32.390125 ntpd[2005]: 23 Jan 23:55:32 ntpd[2005]: basedate set to 2026-01-11 Jan 23 23:55:32.390125 ntpd[2005]: 23 Jan 23:55:32 ntpd[2005]: gps base set to 2026-01-11 (week 2401) Jan 23 23:55:32.390125 ntpd[2005]: 23 Jan 23:55:32 ntpd[2005]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 23:55:32.390125 ntpd[2005]: 23 Jan 23:55:32 ntpd[2005]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 23:55:32.390125 ntpd[2005]: 23 Jan 23:55:32 ntpd[2005]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 23:55:32.390125 ntpd[2005]: 23 Jan 23:55:32 ntpd[2005]: Listen normally on 3 eth0 172.31.28.204:123 Jan 23 23:55:32.390125 ntpd[2005]: 23 Jan 23:55:32 ntpd[2005]: Listen normally on 4 lo [::1]:123 Jan 23 23:55:32.390125 ntpd[2005]: 23 Jan 23:55:32 ntpd[2005]: bind(21) AF_INET6 fe80::435:59ff:fe28:7665%2#123 flags 0x11 failed: Cannot assign requested address Jan 23 23:55:32.390125 ntpd[2005]: 23 Jan 23:55:32 ntpd[2005]: unable to create socket on eth0 (5) for fe80::435:59ff:fe28:7665%2#123 Jan 23 23:55:32.390125 ntpd[2005]: 23 Jan 23:55:32 ntpd[2005]: failed to init interface for address fe80::435:59ff:fe28:7665%2 Jan 23 23:55:32.390125 ntpd[2005]: 23 Jan 23:55:32 ntpd[2005]: Listening on routing socket on fd #21 for interface updates Jan 23 23:55:32.369493 ntpd[2005]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 21:53:23 UTC 2026 (1): Starting Jan 23 23:55:32.335545 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 23:55:32.369538 ntpd[2005]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 23:55:32.424950 ntpd[2005]: 23 Jan 23:55:32 ntpd[2005]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:55:32.424950 ntpd[2005]: 23 Jan 23:55:32 ntpd[2005]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:55:32.353777 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 23:55:32.369559 ntpd[2005]: ---------------------------------------------------- Jan 23 23:55:32.354122 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 23:55:32.369578 ntpd[2005]: ntp-4 is maintained by Network Time Foundation, Jan 23 23:55:32.418795 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 23 23:55:32.369598 ntpd[2005]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 23:55:32.369617 ntpd[2005]: corporation. 
Support and training for ntp-4 are Jan 23 23:55:32.369636 ntpd[2005]: available at https://www.nwtime.org/support Jan 23 23:55:32.369655 ntpd[2005]: ---------------------------------------------------- Jan 23 23:55:32.376016 ntpd[2005]: proto: precision = 0.096 usec (-23) Jan 23 23:55:32.376957 ntpd[2005]: basedate set to 2026-01-11 Jan 23 23:55:32.376990 ntpd[2005]: gps base set to 2026-01-11 (week 2401) Jan 23 23:55:32.385844 ntpd[2005]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 23:55:32.385922 ntpd[2005]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 23:55:32.386205 ntpd[2005]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 23:55:32.386268 ntpd[2005]: Listen normally on 3 eth0 172.31.28.204:123 Jan 23 23:55:32.386359 ntpd[2005]: Listen normally on 4 lo [::1]:123 Jan 23 23:55:32.386439 ntpd[2005]: bind(21) AF_INET6 fe80::435:59ff:fe28:7665%2#123 flags 0x11 failed: Cannot assign requested address Jan 23 23:55:32.386478 ntpd[2005]: unable to create socket on eth0 (5) for fe80::435:59ff:fe28:7665%2#123 Jan 23 23:55:32.386506 ntpd[2005]: failed to init interface for address fe80::435:59ff:fe28:7665%2 Jan 23 23:55:32.386557 ntpd[2005]: Listening on routing socket on fd #21 for interface updates Jan 23 23:55:32.388704 dbus-daemon[2001]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 23:55:32.417894 ntpd[2005]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:55:32.417957 ntpd[2005]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:55:32.457760 jq[2036]: true Jan 23 23:55:32.481418 tar[2023]: linux-arm64/LICENSE Jan 23 23:55:32.481418 tar[2023]: linux-arm64/helm Jan 23 23:55:32.483043 extend-filesystems[2003]: Resized partition /dev/nvme0n1p9 Jan 23 23:55:32.498832 extend-filesystems[2052]: resize2fs 1.47.1 (20-May-2024) Jan 23 23:55:32.525646 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 23 23:55:32.544997 update_engine[2011]: I20260123 23:55:32.526286 2011 main.cc:92] Flatcar Update Engine starting Jan 23 23:55:32.543878 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 23 23:55:32.546336 systemd[1]: Started update-engine.service - Update Engine. Jan 23 23:55:32.557851 update_engine[2011]: I20260123 23:55:32.555616 2011 update_check_scheduler.cc:74] Next update check in 6m25s Jan 23 23:55:32.559696 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
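
update-engine has scheduled its first check ("Next update check in 6m25s"), and locksmithd, the cluster reboot manager, will apply the reboot policy from the /etc/flatcar/update.conf written during provisioning (its status line further below reports strategy="reboot"). Both are queryable from a shell:

    update_engine_client -status    # current update-engine state
    cat /etc/flatcar/update.conf    # e.g. REBOOT_STRATEGY=reboot (contents not shown in the log)
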
Jan 23 23:55:32.644484 coreos-metadata[2000]: Jan 23 23:55:32.644 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 23:55:32.656324 coreos-metadata[2000]: Jan 23 23:55:32.653 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 23 23:55:32.657140 coreos-metadata[2000]: Jan 23 23:55:32.657 INFO Fetch successful Jan 23 23:55:32.657271 coreos-metadata[2000]: Jan 23 23:55:32.657 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 23 23:55:32.671343 coreos-metadata[2000]: Jan 23 23:55:32.668 INFO Fetch successful Jan 23 23:55:32.671343 coreos-metadata[2000]: Jan 23 23:55:32.668 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 23 23:55:32.672459 coreos-metadata[2000]: Jan 23 23:55:32.672 INFO Fetch successful Jan 23 23:55:32.672589 coreos-metadata[2000]: Jan 23 23:55:32.672 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 23 23:55:32.682323 coreos-metadata[2000]: Jan 23 23:55:32.680 INFO Fetch successful Jan 23 23:55:32.682323 coreos-metadata[2000]: Jan 23 23:55:32.680 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 23 23:55:32.684326 coreos-metadata[2000]: Jan 23 23:55:32.683 INFO Fetch failed with 404: resource not found Jan 23 23:55:32.684326 coreos-metadata[2000]: Jan 23 23:55:32.683 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 23 23:55:32.684236 systemd-logind[2010]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 23:55:32.684271 systemd-logind[2010]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 23 23:55:32.685246 coreos-metadata[2000]: Jan 23 23:55:32.685 INFO Fetch successful Jan 23 23:55:32.685803 systemd-logind[2010]: New seat seat0. Jan 23 23:55:32.686169 coreos-metadata[2000]: Jan 23 23:55:32.686 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 23 23:55:32.696225 coreos-metadata[2000]: Jan 23 23:55:32.694 INFO Fetch successful Jan 23 23:55:32.696225 coreos-metadata[2000]: Jan 23 23:55:32.694 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 23 23:55:32.696225 coreos-metadata[2000]: Jan 23 23:55:32.696 INFO Fetch successful Jan 23 23:55:32.696464 coreos-metadata[2000]: Jan 23 23:55:32.696 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 23 23:55:32.700867 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 23:55:32.704758 coreos-metadata[2000]: Jan 23 23:55:32.704 INFO Fetch successful Jan 23 23:55:32.720030 coreos-metadata[2000]: Jan 23 23:55:32.704 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 23 23:55:32.720030 coreos-metadata[2000]: Jan 23 23:55:32.709 INFO Fetch successful Jan 23 23:55:32.747557 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 23 23:55:32.747637 bash[2076]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:55:32.771114 extend-filesystems[2052]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 23 23:55:32.771114 extend-filesystems[2052]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 23 23:55:32.771114 extend-filesystems[2052]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. 
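
The extend-filesystems step grew the root filesystem online: resize2fs took /dev/nvme0n1p9 from 553472 to 3587067 4 KiB blocks, i.e. from about 2.1 GiB to about 13.7 GiB, filling the enlarged partition beneath it. The equivalent manual step, where omitting the size argument means "grow to fill the partition":

    # Online ext4 grow (needs root); 3587067 * 4 KiB ≈ 13.7 GiB.
    resize2fs /dev/nvme0n1p9
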
Jan 23 23:55:32.783504 extend-filesystems[2003]: Resized filesystem in /dev/nvme0n1p9 Jan 23 23:55:32.811854 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 23:55:32.812228 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 23:55:32.841341 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1770) Jan 23 23:55:32.819266 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 23:55:32.848880 dbus-daemon[2001]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 23:55:32.853984 systemd[1]: Starting sshkeys.service... Jan 23 23:55:32.856748 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 23:55:32.861638 dbus-daemon[2001]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2042 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 23:55:32.873475 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 23:55:32.913409 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 23:55:32.918484 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 23:55:32.952225 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 23:55:32.974267 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 23:55:33.016058 polkitd[2097]: Started polkitd version 121 Jan 23 23:55:33.074106 locksmithd[2056]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 23:55:33.092845 polkitd[2097]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 23:55:33.094845 polkitd[2097]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 23:55:33.100588 polkitd[2097]: Finished loading, compiling and executing 2 rules Jan 23 23:55:33.107986 dbus-daemon[2001]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 23:55:33.109450 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 23:55:33.113630 polkitd[2097]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 23:55:33.183842 systemd-hostnamed[2042]: Hostname set to (transient) Jan 23 23:55:33.184027 systemd-resolved[1942]: System hostname changed to 'ip-172-31-28-204'. Jan 23 23:55:33.259544 systemd-networkd[1938]: eth0: Gained IPv6LL Jan 23 23:55:33.286292 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 23:55:33.294680 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 23:55:33.297571 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 23:55:33.307410 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 23 23:55:33.331847 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:55:33.342401 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
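
systemd-hostnamed set a transient hostname from the DHCP-provided name ("Hostname set to <ip-172-31-28-204> (transient)") and systemd-resolved adopted it; a transient name comes from the network rather than /etc/hostname and is not persisted. To verify on the host:

    hostnamectl               # shows "Transient hostname: ip-172-31-28-204"
    hostnamectl --transient   # prints only the transient name
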
Jan 23 23:55:33.358493 coreos-metadata[2117]: Jan 23 23:55:33.357 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 23:55:33.364793 coreos-metadata[2117]: Jan 23 23:55:33.359 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 23 23:55:33.364793 coreos-metadata[2117]: Jan 23 23:55:33.364 INFO Fetch successful Jan 23 23:55:33.364793 coreos-metadata[2117]: Jan 23 23:55:33.364 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 23:55:33.374964 coreos-metadata[2117]: Jan 23 23:55:33.373 INFO Fetch successful Jan 23 23:55:33.385888 unknown[2117]: wrote ssh authorized keys file for user: core Jan 23 23:55:33.400985 containerd[2024]: time="2026-01-23T23:55:33.399927334Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 23 23:55:33.526976 update-ssh-keys[2193]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:55:33.530424 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 23:55:33.546378 systemd[1]: Finished sshkeys.service. Jan 23 23:55:33.598416 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 23:55:33.607019 containerd[2024]: time="2026-01-23T23:55:33.606682080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:55:33.622116 containerd[2024]: time="2026-01-23T23:55:33.621777972Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:55:33.622116 containerd[2024]: time="2026-01-23T23:55:33.621856152Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 23 23:55:33.622116 containerd[2024]: time="2026-01-23T23:55:33.621897792Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 23 23:55:33.626340 containerd[2024]: time="2026-01-23T23:55:33.624515076Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 23 23:55:33.626340 containerd[2024]: time="2026-01-23T23:55:33.624581184Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 23 23:55:33.626340 containerd[2024]: time="2026-01-23T23:55:33.624707340Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:55:33.626340 containerd[2024]: time="2026-01-23T23:55:33.624739356Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:55:33.626340 containerd[2024]: time="2026-01-23T23:55:33.625043448Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:55:33.626340 containerd[2024]: time="2026-01-23T23:55:33.625076232Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 23 23:55:33.626340 containerd[2024]: time="2026-01-23T23:55:33.625106736Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:55:33.626340 containerd[2024]: time="2026-01-23T23:55:33.625131444Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 23 23:55:33.626803 containerd[2024]: time="2026-01-23T23:55:33.625282500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:55:33.627114 containerd[2024]: time="2026-01-23T23:55:33.626928804Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:55:33.627226 containerd[2024]: time="2026-01-23T23:55:33.627177852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:55:33.627284 containerd[2024]: time="2026-01-23T23:55:33.627222492Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 23 23:55:33.627475 containerd[2024]: time="2026-01-23T23:55:33.627431736Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 23 23:55:33.627590 containerd[2024]: time="2026-01-23T23:55:33.627551412Z" level=info msg="metadata content store policy set" policy=shared Jan 23 23:55:33.640415 containerd[2024]: time="2026-01-23T23:55:33.640271904Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 23 23:55:33.640927 containerd[2024]: time="2026-01-23T23:55:33.640713216Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 23 23:55:33.640927 containerd[2024]: time="2026-01-23T23:55:33.640861980Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 23 23:55:33.640927 containerd[2024]: time="2026-01-23T23:55:33.640912116Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 23 23:55:33.641180 containerd[2024]: time="2026-01-23T23:55:33.640950000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 23 23:55:33.641388 containerd[2024]: time="2026-01-23T23:55:33.641237016Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 23 23:55:33.646015 containerd[2024]: time="2026-01-23T23:55:33.645948456Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 23 23:55:33.646275 containerd[2024]: time="2026-01-23T23:55:33.646233456Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 23 23:55:33.646374 containerd[2024]: time="2026-01-23T23:55:33.646284684Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 23 23:55:33.646374 containerd[2024]: time="2026-01-23T23:55:33.646347660Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 23 23:55:33.646495 containerd[2024]: time="2026-01-23T23:55:33.646388472Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 23 23:55:33.646495 containerd[2024]: time="2026-01-23T23:55:33.646421196Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 23 23:55:33.646495 containerd[2024]: time="2026-01-23T23:55:33.646451700Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 23 23:55:33.646495 containerd[2024]: time="2026-01-23T23:55:33.646484004Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 23 23:55:33.646712 containerd[2024]: time="2026-01-23T23:55:33.646516920Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 23 23:55:33.646712 containerd[2024]: time="2026-01-23T23:55:33.646546776Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 23 23:55:33.646712 containerd[2024]: time="2026-01-23T23:55:33.646575504Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 23 23:55:33.646712 containerd[2024]: time="2026-01-23T23:55:33.646626576Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 23 23:55:33.646712 containerd[2024]: time="2026-01-23T23:55:33.646671516Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 23 23:55:33.646712 containerd[2024]: time="2026-01-23T23:55:33.646703844Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 23 23:55:33.646980 containerd[2024]: time="2026-01-23T23:55:33.646733820Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 23 23:55:33.646980 containerd[2024]: time="2026-01-23T23:55:33.646765668Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 23 23:55:33.646980 containerd[2024]: time="2026-01-23T23:55:33.646795248Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 23 23:55:33.646980 containerd[2024]: time="2026-01-23T23:55:33.646826508Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 23 23:55:33.646980 containerd[2024]: time="2026-01-23T23:55:33.646854456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 23 23:55:33.646980 containerd[2024]: time="2026-01-23T23:55:33.646899060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 23 23:55:33.646980 containerd[2024]: time="2026-01-23T23:55:33.646932108Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 23 23:55:33.646980 containerd[2024]: time="2026-01-23T23:55:33.646966536Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 23 23:55:33.648741 containerd[2024]: time="2026-01-23T23:55:33.646996056Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 23 23:55:33.648741 containerd[2024]: time="2026-01-23T23:55:33.647026908Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 23 23:55:33.648741 containerd[2024]: time="2026-01-23T23:55:33.647060628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 23 23:55:33.648741 containerd[2024]: time="2026-01-23T23:55:33.647102124Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 23 23:55:33.648741 containerd[2024]: time="2026-01-23T23:55:33.647153460Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 23 23:55:33.648741 containerd[2024]: time="2026-01-23T23:55:33.647182824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 23 23:55:33.648741 containerd[2024]: time="2026-01-23T23:55:33.647209308Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 23 23:55:33.650771 containerd[2024]: time="2026-01-23T23:55:33.650446656Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 23 23:55:33.650876 containerd[2024]: time="2026-01-23T23:55:33.650786136Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 23 23:55:33.650876 containerd[2024]: time="2026-01-23T23:55:33.650817252Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 23 23:55:33.650876 containerd[2024]: time="2026-01-23T23:55:33.650846724Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 23 23:55:33.651007 containerd[2024]: time="2026-01-23T23:55:33.650871492Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 23 23:55:33.651007 containerd[2024]: time="2026-01-23T23:55:33.650910324Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 23 23:55:33.651007 containerd[2024]: time="2026-01-23T23:55:33.650938116Z" level=info msg="NRI interface is disabled by configuration." Jan 23 23:55:33.651007 containerd[2024]: time="2026-01-23T23:55:33.650969292Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 23 23:55:33.652448 containerd[2024]: time="2026-01-23T23:55:33.651506268Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 23 23:55:33.652448 containerd[2024]: time="2026-01-23T23:55:33.651614868Z" level=info msg="Connect containerd service" Jan 23 23:55:33.652448 containerd[2024]: time="2026-01-23T23:55:33.651664848Z" level=info msg="using legacy CRI server" Jan 23 23:55:33.652448 containerd[2024]: time="2026-01-23T23:55:33.651681948Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 23:55:33.652448 containerd[2024]: time="2026-01-23T23:55:33.652023996Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 23 23:55:33.660133 containerd[2024]: time="2026-01-23T23:55:33.659118948Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 23:55:33.660133 
containerd[2024]: time="2026-01-23T23:55:33.659472912Z" level=info msg="Start subscribing containerd event" Jan 23 23:55:33.660133 containerd[2024]: time="2026-01-23T23:55:33.659550552Z" level=info msg="Start recovering state" Jan 23 23:55:33.660133 containerd[2024]: time="2026-01-23T23:55:33.659673516Z" level=info msg="Start event monitor" Jan 23 23:55:33.660133 containerd[2024]: time="2026-01-23T23:55:33.659698044Z" level=info msg="Start snapshots syncer" Jan 23 23:55:33.660133 containerd[2024]: time="2026-01-23T23:55:33.659720100Z" level=info msg="Start cni network conf syncer for default" Jan 23 23:55:33.660133 containerd[2024]: time="2026-01-23T23:55:33.659738892Z" level=info msg="Start streaming server" Jan 23 23:55:33.669369 containerd[2024]: time="2026-01-23T23:55:33.663351576Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 23:55:33.669369 containerd[2024]: time="2026-01-23T23:55:33.663475068Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 23:55:33.669369 containerd[2024]: time="2026-01-23T23:55:33.665862876Z" level=info msg="containerd successfully booted in 0.273810s" Jan 23 23:55:33.663706 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 23:55:33.674569 amazon-ssm-agent[2184]: Initializing new seelog logger Jan 23 23:55:33.675036 amazon-ssm-agent[2184]: New Seelog Logger Creation Complete Jan 23 23:55:33.675036 amazon-ssm-agent[2184]: 2026/01/23 23:55:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:55:33.675036 amazon-ssm-agent[2184]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:55:33.680949 amazon-ssm-agent[2184]: 2026/01/23 23:55:33 processing appconfig overrides Jan 23 23:55:33.683099 amazon-ssm-agent[2184]: 2026/01/23 23:55:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:55:33.683099 amazon-ssm-agent[2184]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:55:33.683835 amazon-ssm-agent[2184]: 2026-01-23 23:55:33 INFO Proxy environment variables: Jan 23 23:55:33.683927 amazon-ssm-agent[2184]: 2026/01/23 23:55:33 processing appconfig overrides Jan 23 23:55:33.685068 amazon-ssm-agent[2184]: 2026/01/23 23:55:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:55:33.685068 amazon-ssm-agent[2184]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:55:33.687998 amazon-ssm-agent[2184]: 2026/01/23 23:55:33 processing appconfig overrides Jan 23 23:55:33.695273 amazon-ssm-agent[2184]: 2026/01/23 23:55:33 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:55:33.695273 amazon-ssm-agent[2184]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 23 23:55:33.695448 amazon-ssm-agent[2184]: 2026/01/23 23:55:33 processing appconfig overrides Jan 23 23:55:33.783810 amazon-ssm-agent[2184]: 2026-01-23 23:55:33 INFO no_proxy: Jan 23 23:55:33.883727 amazon-ssm-agent[2184]: 2026-01-23 23:55:33 INFO https_proxy: Jan 23 23:55:33.984525 amazon-ssm-agent[2184]: 2026-01-23 23:55:33 INFO http_proxy: Jan 23 23:55:34.083457 amazon-ssm-agent[2184]: 2026-01-23 23:55:33 INFO Checking if agent identity type OnPrem can be assumed Jan 23 23:55:34.184443 amazon-ssm-agent[2184]: 2026-01-23 23:55:33 INFO Checking if agent identity type EC2 can be assumed Jan 23 23:55:34.284963 amazon-ssm-agent[2184]: 2026-01-23 23:55:33 INFO Agent will take identity from EC2 Jan 23 23:55:34.383450 amazon-ssm-agent[2184]: 2026-01-23 23:55:33 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:55:34.482176 amazon-ssm-agent[2184]: 2026-01-23 23:55:33 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:55:34.560437 tar[2023]: linux-arm64/README.md Jan 23 23:55:34.582340 amazon-ssm-agent[2184]: 2026-01-23 23:55:33 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:55:34.603979 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 23:55:34.681673 amazon-ssm-agent[2184]: 2026-01-23 23:55:33 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 23 23:55:34.782867 amazon-ssm-agent[2184]: 2026-01-23 23:55:33 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 23 23:55:34.857690 sshd_keygen[2045]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 23:55:34.881376 amazon-ssm-agent[2184]: 2026-01-23 23:55:33 INFO [amazon-ssm-agent] Starting Core Agent Jan 23 23:55:34.904392 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 23:55:34.918527 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 23:55:34.930378 systemd[1]: Started sshd@0-172.31.28.204:22-4.153.228.146:34930.service - OpenSSH per-connection server daemon (4.153.228.146:34930). Jan 23 23:55:34.959865 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 23:55:34.963702 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 23:55:34.976090 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 23:55:34.981688 amazon-ssm-agent[2184]: 2026-01-23 23:55:33 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 23 23:55:35.018551 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 23:55:35.035809 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 23:55:35.045847 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 23:55:35.051156 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 23 23:55:35.084129 amazon-ssm-agent[2184]: 2026-01-23 23:55:33 INFO [Registrar] Starting registrar module Jan 23 23:55:35.184475 amazon-ssm-agent[2184]: 2026-01-23 23:55:33 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 23 23:55:35.370568 ntpd[2005]: Listen normally on 6 eth0 [fe80::435:59ff:fe28:7665%2]:123 Jan 23 23:55:35.371069 ntpd[2005]: 23 Jan 23:55:35 ntpd[2005]: Listen normally on 6 eth0 [fe80::435:59ff:fe28:7665%2]:123 Jan 23 23:55:35.524887 sshd[2236]: Accepted publickey for core from 4.153.228.146 port 34930 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:35.532179 sshd[2236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:35.561803 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 23:55:35.580792 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 23:55:35.593381 systemd-logind[2010]: New session 1 of user core. Jan 23 23:55:35.626291 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 23:55:35.644055 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 23:55:35.668219 (systemd)[2247]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 23:55:35.737367 amazon-ssm-agent[2184]: 2026-01-23 23:55:35 INFO [EC2Identity] EC2 registration was successful. Jan 23 23:55:35.787190 amazon-ssm-agent[2184]: 2026-01-23 23:55:35 INFO [CredentialRefresher] credentialRefresher has started Jan 23 23:55:35.787190 amazon-ssm-agent[2184]: 2026-01-23 23:55:35 INFO [CredentialRefresher] Starting credentials refresher loop Jan 23 23:55:35.787190 amazon-ssm-agent[2184]: 2026-01-23 23:55:35 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 23 23:55:35.837759 amazon-ssm-agent[2184]: 2026-01-23 23:55:35 INFO [CredentialRefresher] Next credential rotation will be in 30.691641439466668 minutes Jan 23 23:55:35.934278 systemd[2247]: Queued start job for default target default.target. Jan 23 23:55:35.941867 systemd[2247]: Created slice app.slice - User Application Slice. Jan 23 23:55:35.941938 systemd[2247]: Reached target paths.target - Paths. Jan 23 23:55:35.941971 systemd[2247]: Reached target timers.target - Timers. Jan 23 23:55:35.947586 systemd[2247]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 23:55:35.970883 systemd[2247]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 23:55:35.971418 systemd[2247]: Reached target sockets.target - Sockets. Jan 23 23:55:35.971622 systemd[2247]: Reached target basic.target - Basic System. Jan 23 23:55:35.971839 systemd[2247]: Reached target default.target - Main User Target. Jan 23 23:55:35.971927 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 23:55:35.975375 systemd[2247]: Startup finished in 285ms. Jan 23 23:55:35.978698 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 23:55:36.234615 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:55:36.238154 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 23:55:36.244478 systemd[1]: Startup finished in 1.178s (kernel) + 8.954s (initrd) + 9.523s (userspace) = 19.656s. 
Jan 23 23:55:36.253922 (kubelet)[2263]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:55:36.384771 systemd[1]: Started sshd@1-172.31.28.204:22-4.153.228.146:56846.service - OpenSSH per-connection server daemon (4.153.228.146:56846). Jan 23 23:55:36.817389 amazon-ssm-agent[2184]: 2026-01-23 23:55:36 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 23 23:55:36.916345 sshd[2269]: Accepted publickey for core from 4.153.228.146 port 56846 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:36.922644 amazon-ssm-agent[2184]: 2026-01-23 23:55:36 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2276) started Jan 23 23:55:36.922976 sshd[2269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:36.941954 systemd-logind[2010]: New session 2 of user core. Jan 23 23:55:36.948268 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 23:55:37.023694 amazon-ssm-agent[2184]: 2026-01-23 23:55:36 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 23 23:55:37.297691 sshd[2269]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:37.316741 systemd[1]: sshd@1-172.31.28.204:22-4.153.228.146:56846.service: Deactivated successfully. Jan 23 23:55:37.320778 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 23:55:37.327618 systemd-logind[2010]: Session 2 logged out. Waiting for processes to exit. Jan 23 23:55:37.331953 systemd-logind[2010]: Removed session 2. Jan 23 23:55:37.385876 systemd[1]: Started sshd@2-172.31.28.204:22-4.153.228.146:56856.service - OpenSSH per-connection server daemon (4.153.228.146:56856). Jan 23 23:55:37.561118 kubelet[2263]: E0123 23:55:37.560941 2263 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:55:37.567717 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:55:37.568074 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:55:37.569858 systemd[1]: kubelet.service: Consumed 1.409s CPU time. Jan 23 23:55:37.897341 sshd[2291]: Accepted publickey for core from 4.153.228.146 port 56856 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:37.899873 sshd[2291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:37.909650 systemd-logind[2010]: New session 3 of user core. Jan 23 23:55:37.921576 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 23:55:38.248901 sshd[2291]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:38.254848 systemd-logind[2010]: Session 3 logged out. Waiting for processes to exit. Jan 23 23:55:38.255992 systemd[1]: sshd@2-172.31.28.204:22-4.153.228.146:56856.service: Deactivated successfully. Jan 23 23:55:38.259489 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 23:55:38.262830 systemd-logind[2010]: Removed session 3. 
Jan 23 23:55:38.357801 systemd[1]: Started sshd@3-172.31.28.204:22-4.153.228.146:56864.service - OpenSSH per-connection server daemon (4.153.228.146:56864). Jan 23 23:55:38.903396 sshd[2300]: Accepted publickey for core from 4.153.228.146 port 56864 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:38.905953 sshd[2300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:38.913373 systemd-logind[2010]: New session 4 of user core. Jan 23 23:55:38.924586 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 23:55:39.288747 sshd[2300]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:39.294405 systemd[1]: sshd@3-172.31.28.204:22-4.153.228.146:56864.service: Deactivated successfully. Jan 23 23:55:39.297465 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 23:55:39.301094 systemd-logind[2010]: Session 4 logged out. Waiting for processes to exit. Jan 23 23:55:39.303071 systemd-logind[2010]: Removed session 4. Jan 23 23:55:39.694799 systemd-resolved[1942]: Clock change detected. Flushing caches. Jan 23 23:55:39.717874 systemd[1]: Started sshd@4-172.31.28.204:22-4.153.228.146:56878.service - OpenSSH per-connection server daemon (4.153.228.146:56878). Jan 23 23:55:40.256953 sshd[2307]: Accepted publickey for core from 4.153.228.146 port 56878 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:40.259620 sshd[2307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:40.267021 systemd-logind[2010]: New session 5 of user core. Jan 23 23:55:40.279572 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 23:55:40.572524 sudo[2310]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 23:55:40.573197 sudo[2310]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:55:40.590623 sudo[2310]: pam_unix(sudo:session): session closed for user root Jan 23 23:55:40.674625 sshd[2307]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:40.681543 systemd[1]: sshd@4-172.31.28.204:22-4.153.228.146:56878.service: Deactivated successfully. Jan 23 23:55:40.684637 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 23:55:40.686168 systemd-logind[2010]: Session 5 logged out. Waiting for processes to exit. Jan 23 23:55:40.688926 systemd-logind[2010]: Removed session 5. Jan 23 23:55:40.767787 systemd[1]: Started sshd@5-172.31.28.204:22-4.153.228.146:56882.service - OpenSSH per-connection server daemon (4.153.228.146:56882). Jan 23 23:55:41.256563 sshd[2315]: Accepted publickey for core from 4.153.228.146 port 56882 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:41.259226 sshd[2315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:41.266662 systemd-logind[2010]: New session 6 of user core. Jan 23 23:55:41.278618 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 23 23:55:41.534615 sudo[2320]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 23:55:41.536045 sudo[2320]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:55:41.542129 sudo[2320]: pam_unix(sudo:session): session closed for user root Jan 23 23:55:41.552131 sudo[2319]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 23 23:55:41.552778 sudo[2319]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:55:41.574473 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 23 23:55:41.590222 auditctl[2323]: No rules Jan 23 23:55:41.591059 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 23:55:41.591459 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 23 23:55:41.603990 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:55:41.647419 augenrules[2341]: No rules Jan 23 23:55:41.649658 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:55:41.652118 sudo[2319]: pam_unix(sudo:session): session closed for user root Jan 23 23:55:41.729666 sshd[2315]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:41.735196 systemd-logind[2010]: Session 6 logged out. Waiting for processes to exit. Jan 23 23:55:41.735597 systemd[1]: sshd@5-172.31.28.204:22-4.153.228.146:56882.service: Deactivated successfully. Jan 23 23:55:41.739581 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 23:55:41.744942 systemd-logind[2010]: Removed session 6. Jan 23 23:55:41.816560 systemd[1]: Started sshd@6-172.31.28.204:22-4.153.228.146:56896.service - OpenSSH per-connection server daemon (4.153.228.146:56896). Jan 23 23:55:42.318223 sshd[2349]: Accepted publickey for core from 4.153.228.146 port 56896 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:42.320731 sshd[2349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:42.330589 systemd-logind[2010]: New session 7 of user core. Jan 23 23:55:42.333625 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 23:55:42.595595 sudo[2352]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 23:55:42.596219 sudo[2352]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:55:43.218183 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 23:55:43.220269 (dockerd)[2367]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 23:55:43.732628 dockerd[2367]: time="2026-01-23T23:55:43.732528388Z" level=info msg="Starting up" Jan 23 23:55:43.999650 systemd[1]: var-lib-docker-metacopy\x2dcheck1961097322-merged.mount: Deactivated successfully. Jan 23 23:55:44.011175 dockerd[2367]: time="2026-01-23T23:55:44.011113273Z" level=info msg="Loading containers: start." Jan 23 23:55:44.226347 kernel: Initializing XFRM netlink socket Jan 23 23:55:44.294801 (udev-worker)[2390]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:55:44.391561 systemd-networkd[1938]: docker0: Link UP Jan 23 23:55:44.415887 dockerd[2367]: time="2026-01-23T23:55:44.415740903Z" level=info msg="Loading containers: done." 
Jan 23 23:55:44.441691 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1425249245-merged.mount: Deactivated successfully. Jan 23 23:55:44.453682 dockerd[2367]: time="2026-01-23T23:55:44.453607167Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 23:55:44.453954 dockerd[2367]: time="2026-01-23T23:55:44.453757755Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 23 23:55:44.454053 dockerd[2367]: time="2026-01-23T23:55:44.453983283Z" level=info msg="Daemon has completed initialization" Jan 23 23:55:44.511942 dockerd[2367]: time="2026-01-23T23:55:44.511865043Z" level=info msg="API listen on /run/docker.sock" Jan 23 23:55:44.512442 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 23:55:45.966347 containerd[2024]: time="2026-01-23T23:55:45.965959063Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 23 23:55:46.605495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3821740312.mount: Deactivated successfully. Jan 23 23:55:48.040531 containerd[2024]: time="2026-01-23T23:55:48.040449149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:48.042748 containerd[2024]: time="2026-01-23T23:55:48.042688637Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=27387281" Jan 23 23:55:48.044394 containerd[2024]: time="2026-01-23T23:55:48.044337257Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:48.051352 containerd[2024]: time="2026-01-23T23:55:48.050707829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:48.054020 containerd[2024]: time="2026-01-23T23:55:48.053330945Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 2.087284678s" Jan 23 23:55:48.054020 containerd[2024]: time="2026-01-23T23:55:48.053398685Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Jan 23 23:55:48.055982 containerd[2024]: time="2026-01-23T23:55:48.055857005Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 23 23:55:48.142479 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 23:55:48.151687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:55:48.511196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 23:55:48.523874 (kubelet)[2574]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:55:48.595590 kubelet[2574]: E0123 23:55:48.594972 2574 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:55:48.605065 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:55:48.605855 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:55:49.720935 containerd[2024]: time="2026-01-23T23:55:49.720874401Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:49.723401 containerd[2024]: time="2026-01-23T23:55:49.723334317Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23553081" Jan 23 23:55:49.723916 containerd[2024]: time="2026-01-23T23:55:49.723878661Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:49.729874 containerd[2024]: time="2026-01-23T23:55:49.729811437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:49.732329 containerd[2024]: time="2026-01-23T23:55:49.732256953Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.676074628s" Jan 23 23:55:49.732860 containerd[2024]: time="2026-01-23T23:55:49.732476541Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Jan 23 23:55:49.733614 containerd[2024]: time="2026-01-23T23:55:49.733177473Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 23 23:55:51.007923 containerd[2024]: time="2026-01-23T23:55:51.006760964Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18298067" Jan 23 23:55:51.007923 containerd[2024]: time="2026-01-23T23:55:51.006853124Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:51.013067 containerd[2024]: time="2026-01-23T23:55:51.013001816Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:51.015784 containerd[2024]: time="2026-01-23T23:55:51.015719156Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag 
\"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.282491019s" Jan 23 23:55:51.015944 containerd[2024]: time="2026-01-23T23:55:51.015780044Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Jan 23 23:55:51.017744 containerd[2024]: time="2026-01-23T23:55:51.017697644Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 23 23:55:51.018087 containerd[2024]: time="2026-01-23T23:55:51.018031544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:52.314330 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3911229832.mount: Deactivated successfully. Jan 23 23:55:52.861385 containerd[2024]: time="2026-01-23T23:55:52.860239597Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:52.862539 containerd[2024]: time="2026-01-23T23:55:52.862484245Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=28258673" Jan 23 23:55:52.863898 containerd[2024]: time="2026-01-23T23:55:52.863822029Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:52.867492 containerd[2024]: time="2026-01-23T23:55:52.867427441Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:52.870003 containerd[2024]: time="2026-01-23T23:55:52.868977757Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.851061617s" Jan 23 23:55:52.870003 containerd[2024]: time="2026-01-23T23:55:52.869037385Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Jan 23 23:55:52.870003 containerd[2024]: time="2026-01-23T23:55:52.869735713Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 23 23:55:53.395491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3644890112.mount: Deactivated successfully. 
Jan 23 23:55:54.586918 containerd[2024]: time="2026-01-23T23:55:54.585721993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:54.588020 containerd[2024]: time="2026-01-23T23:55:54.587960293Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152117" Jan 23 23:55:54.588215 containerd[2024]: time="2026-01-23T23:55:54.588163621Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:54.594578 containerd[2024]: time="2026-01-23T23:55:54.594505790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:54.597149 containerd[2024]: time="2026-01-23T23:55:54.597096158Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.727312301s" Jan 23 23:55:54.597459 containerd[2024]: time="2026-01-23T23:55:54.597289166Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jan 23 23:55:54.598967 containerd[2024]: time="2026-01-23T23:55:54.598227338Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 23:55:55.057677 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount873835737.mount: Deactivated successfully. 
Jan 23 23:55:55.064784 containerd[2024]: time="2026-01-23T23:55:55.064704792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:55.066859 containerd[2024]: time="2026-01-23T23:55:55.066485028Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 23 23:55:55.068347 containerd[2024]: time="2026-01-23T23:55:55.068014080Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:55.073355 containerd[2024]: time="2026-01-23T23:55:55.072291528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:55.074352 containerd[2024]: time="2026-01-23T23:55:55.074105460Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 475.81523ms" Jan 23 23:55:55.074352 containerd[2024]: time="2026-01-23T23:55:55.074166252Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 23 23:55:55.075354 containerd[2024]: time="2026-01-23T23:55:55.074818836Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 23 23:55:55.597016 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1349263077.mount: Deactivated successfully. Jan 23 23:55:58.205193 containerd[2024]: time="2026-01-23T23:55:58.204373731Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:58.206803 containerd[2024]: time="2026-01-23T23:55:58.206731467Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=70013651" Jan 23 23:55:58.209029 containerd[2024]: time="2026-01-23T23:55:58.208966239Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:58.217744 containerd[2024]: time="2026-01-23T23:55:58.217622812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:58.220520 containerd[2024]: time="2026-01-23T23:55:58.220277548Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.145399012s" Jan 23 23:55:58.220520 containerd[2024]: time="2026-01-23T23:55:58.220365004Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jan 23 23:55:58.764915 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Jan 23 23:55:58.771865 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:55:59.179736 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:55:59.190046 (kubelet)[2736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:55:59.263801 kubelet[2736]: E0123 23:55:59.263742 2736 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:55:59.267534 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:55:59.268037 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:56:03.544057 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 23 23:56:04.636079 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:04.648840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:04.711095 systemd[1]: Reloading requested from client PID 2753 ('systemctl') (unit session-7.scope)... Jan 23 23:56:04.711129 systemd[1]: Reloading... Jan 23 23:56:04.963376 zram_generator::config[2796]: No configuration found. Jan 23 23:56:05.199729 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:56:05.373563 systemd[1]: Reloading finished in 661 ms. Jan 23 23:56:05.467500 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 23:56:05.467684 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 23 23:56:05.469421 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:05.478071 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:05.798385 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:05.814142 (kubelet)[2856]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:56:05.886385 kubelet[2856]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:56:05.886385 kubelet[2856]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:56:05.886385 kubelet[2856]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 23:56:05.886385 kubelet[2856]: I0123 23:56:05.886282 2856 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:56:06.394365 kubelet[2856]: I0123 23:56:06.393694 2856 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 23:56:06.394365 kubelet[2856]: I0123 23:56:06.393738 2856 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:56:06.394365 kubelet[2856]: I0123 23:56:06.394104 2856 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 23:56:06.442461 kubelet[2856]: E0123 23:56:06.442409 2856 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.204:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.204:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 23:56:06.443742 kubelet[2856]: I0123 23:56:06.443710 2856 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:56:06.455457 kubelet[2856]: E0123 23:56:06.455392 2856 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:56:06.455457 kubelet[2856]: I0123 23:56:06.455447 2856 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 23 23:56:06.460512 kubelet[2856]: I0123 23:56:06.460461 2856 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 23:56:06.461138 kubelet[2856]: I0123 23:56:06.461087 2856 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:56:06.461446 kubelet[2856]: I0123 23:56:06.461139 2856 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-204","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 23:56:06.461618 kubelet[2856]: I0123 23:56:06.461575 2856 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:56:06.461618 kubelet[2856]: I0123 23:56:06.461599 2856 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 23:56:06.461976 kubelet[2856]: I0123 23:56:06.461947 2856 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:56:06.468121 kubelet[2856]: I0123 23:56:06.468058 2856 kubelet.go:480] "Attempting to sync node with API server" Jan 23 23:56:06.468121 kubelet[2856]: I0123 23:56:06.468109 2856 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:56:06.468287 kubelet[2856]: I0123 23:56:06.468166 2856 kubelet.go:386] "Adding apiserver pod source" Jan 23 23:56:06.470495 kubelet[2856]: I0123 23:56:06.470454 2856 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:56:06.475040 kubelet[2856]: E0123 23:56:06.474990 2856 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.204:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-204&limit=500&resourceVersion=0\": dial tcp 172.31.28.204:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 23:56:06.475748 kubelet[2856]: I0123 23:56:06.475718 2856 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:56:06.477100 kubelet[2856]: I0123 23:56:06.477068 2856 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is 
disabled" Jan 23 23:56:06.478473 kubelet[2856]: W0123 23:56:06.477446 2856 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 23:56:06.484095 kubelet[2856]: I0123 23:56:06.484065 2856 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:56:06.484425 kubelet[2856]: I0123 23:56:06.484402 2856 server.go:1289] "Started kubelet" Jan 23 23:56:06.494729 kubelet[2856]: I0123 23:56:06.494686 2856 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:56:06.501080 kubelet[2856]: E0123 23:56:06.498914 2856 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.28.204:6443/api/v1/namespaces/default/events\": dial tcp 172.31.28.204:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-28-204.188d81749d744b11 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-28-204,UID:ip-172-31-28-204,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-28-204,},FirstTimestamp:2026-01-23 23:56:06.484306705 +0000 UTC m=+0.662868029,LastTimestamp:2026-01-23 23:56:06.484306705 +0000 UTC m=+0.662868029,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-28-204,}" Jan 23 23:56:06.502548 kubelet[2856]: E0123 23:56:06.502476 2856 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.28.204:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.204:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 23:56:06.503973 kubelet[2856]: I0123 23:56:06.503943 2856 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:56:06.504622 kubelet[2856]: E0123 23:56:06.504585 2856 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-204\" not found" Jan 23 23:56:06.509127 kubelet[2856]: I0123 23:56:06.508570 2856 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:56:06.511958 kubelet[2856]: I0123 23:56:06.511880 2856 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:56:06.513737 kubelet[2856]: I0123 23:56:06.513685 2856 server.go:317] "Adding debug handlers to kubelet server" Jan 23 23:56:06.517278 kubelet[2856]: E0123 23:56:06.515586 2856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-204?timeout=10s\": dial tcp 172.31.28.204:6443: connect: connection refused" interval="200ms" Jan 23 23:56:06.517278 kubelet[2856]: I0123 23:56:06.516206 2856 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:56:06.518206 kubelet[2856]: E0123 23:56:06.518118 2856 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.204:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.204:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 23:56:06.518454 
kubelet[2856]: I0123 23:56:06.503932 2856 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:56:06.518902 kubelet[2856]: I0123 23:56:06.518873 2856 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:56:06.519612 kubelet[2856]: I0123 23:56:06.519586 2856 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:56:06.521999 kubelet[2856]: I0123 23:56:06.521958 2856 factory.go:223] Registration of the containerd container factory successfully Jan 23 23:56:06.522213 kubelet[2856]: I0123 23:56:06.522189 2856 factory.go:223] Registration of the systemd container factory successfully Jan 23 23:56:06.522575 kubelet[2856]: I0123 23:56:06.522537 2856 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:56:06.547417 kubelet[2856]: I0123 23:56:06.546983 2856 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 23:56:06.550855 kubelet[2856]: I0123 23:56:06.550785 2856 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 23:56:06.550855 kubelet[2856]: I0123 23:56:06.550832 2856 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 23:56:06.551042 kubelet[2856]: I0123 23:56:06.550867 2856 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 23:56:06.551042 kubelet[2856]: I0123 23:56:06.550884 2856 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 23:56:06.551042 kubelet[2856]: E0123 23:56:06.550954 2856 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:56:06.558152 kubelet[2856]: E0123 23:56:06.558105 2856 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:56:06.563707 kubelet[2856]: E0123 23:56:06.563649 2856 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.204:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.204:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 23:56:06.574187 kubelet[2856]: I0123 23:56:06.574123 2856 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:56:06.574605 kubelet[2856]: I0123 23:56:06.574151 2856 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:56:06.574605 kubelet[2856]: I0123 23:56:06.574433 2856 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:56:06.577876 kubelet[2856]: I0123 23:56:06.577483 2856 policy_none.go:49] "None policy: Start" Jan 23 23:56:06.577876 kubelet[2856]: I0123 23:56:06.577519 2856 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:56:06.577876 kubelet[2856]: I0123 23:56:06.577542 2856 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:56:06.587424 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 23 23:56:06.605751 kubelet[2856]: E0123 23:56:06.604845 2856 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-204\" not found" Jan 23 23:56:06.605498 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 23:56:06.623239 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 23:56:06.626844 kubelet[2856]: E0123 23:56:06.626490 2856 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 23:56:06.627583 kubelet[2856]: I0123 23:56:06.627116 2856 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:56:06.627583 kubelet[2856]: I0123 23:56:06.627368 2856 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:56:06.630015 kubelet[2856]: I0123 23:56:06.628063 2856 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:56:06.630864 kubelet[2856]: E0123 23:56:06.630830 2856 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 23:56:06.631072 kubelet[2856]: E0123 23:56:06.631041 2856 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-28-204\" not found" Jan 23 23:56:06.674628 systemd[1]: Created slice kubepods-burstable-pod2ef4fd9c522580e51847e531a406a31c.slice - libcontainer container kubepods-burstable-pod2ef4fd9c522580e51847e531a406a31c.slice. Jan 23 23:56:06.688986 kubelet[2856]: E0123 23:56:06.688935 2856 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-204\" not found" node="ip-172-31-28-204" Jan 23 23:56:06.694696 systemd[1]: Created slice kubepods-burstable-pode45031d4cdc00a04273b83151faa2473.slice - libcontainer container kubepods-burstable-pode45031d4cdc00a04273b83151faa2473.slice. Jan 23 23:56:06.699801 kubelet[2856]: E0123 23:56:06.699742 2856 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-204\" not found" node="ip-172-31-28-204" Jan 23 23:56:06.704035 systemd[1]: Created slice kubepods-burstable-podeb3284e2505de11c19436bea18e1c9fe.slice - libcontainer container kubepods-burstable-podeb3284e2505de11c19436bea18e1c9fe.slice. 
Jan 23 23:56:06.707976 kubelet[2856]: E0123 23:56:06.707921 2856 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-204\" not found" node="ip-172-31-28-204" Jan 23 23:56:06.717375 kubelet[2856]: E0123 23:56:06.717275 2856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-204?timeout=10s\": dial tcp 172.31.28.204:6443: connect: connection refused" interval="400ms" Jan 23 23:56:06.720481 kubelet[2856]: I0123 23:56:06.720438 2856 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e45031d4cdc00a04273b83151faa2473-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-204\" (UID: \"e45031d4cdc00a04273b83151faa2473\") " pod="kube-system/kube-controller-manager-ip-172-31-28-204" Jan 23 23:56:06.720772 kubelet[2856]: I0123 23:56:06.720645 2856 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ef4fd9c522580e51847e531a406a31c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-204\" (UID: \"2ef4fd9c522580e51847e531a406a31c\") " pod="kube-system/kube-apiserver-ip-172-31-28-204" Jan 23 23:56:06.720772 kubelet[2856]: I0123 23:56:06.720747 2856 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e45031d4cdc00a04273b83151faa2473-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-204\" (UID: \"e45031d4cdc00a04273b83151faa2473\") " pod="kube-system/kube-controller-manager-ip-172-31-28-204" Jan 23 23:56:06.721046 kubelet[2856]: I0123 23:56:06.720790 2856 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e45031d4cdc00a04273b83151faa2473-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-204\" (UID: \"e45031d4cdc00a04273b83151faa2473\") " pod="kube-system/kube-controller-manager-ip-172-31-28-204" Jan 23 23:56:06.721046 kubelet[2856]: I0123 23:56:06.720839 2856 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e45031d4cdc00a04273b83151faa2473-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-204\" (UID: \"e45031d4cdc00a04273b83151faa2473\") " pod="kube-system/kube-controller-manager-ip-172-31-28-204" Jan 23 23:56:06.721046 kubelet[2856]: I0123 23:56:06.720880 2856 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e45031d4cdc00a04273b83151faa2473-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-204\" (UID: \"e45031d4cdc00a04273b83151faa2473\") " pod="kube-system/kube-controller-manager-ip-172-31-28-204" Jan 23 23:56:06.721046 kubelet[2856]: I0123 23:56:06.720917 2856 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb3284e2505de11c19436bea18e1c9fe-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-204\" (UID: \"eb3284e2505de11c19436bea18e1c9fe\") " pod="kube-system/kube-scheduler-ip-172-31-28-204" Jan 23 23:56:06.721046 kubelet[2856]: I0123 23:56:06.720950 
2856 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ef4fd9c522580e51847e531a406a31c-ca-certs\") pod \"kube-apiserver-ip-172-31-28-204\" (UID: \"2ef4fd9c522580e51847e531a406a31c\") " pod="kube-system/kube-apiserver-ip-172-31-28-204" Jan 23 23:56:06.721306 kubelet[2856]: I0123 23:56:06.720983 2856 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ef4fd9c522580e51847e531a406a31c-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-204\" (UID: \"2ef4fd9c522580e51847e531a406a31c\") " pod="kube-system/kube-apiserver-ip-172-31-28-204" Jan 23 23:56:06.730729 kubelet[2856]: I0123 23:56:06.730666 2856 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-204" Jan 23 23:56:06.731568 kubelet[2856]: E0123 23:56:06.731500 2856 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.204:6443/api/v1/nodes\": dial tcp 172.31.28.204:6443: connect: connection refused" node="ip-172-31-28-204" Jan 23 23:56:06.933930 kubelet[2856]: I0123 23:56:06.933785 2856 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-204" Jan 23 23:56:06.935548 kubelet[2856]: E0123 23:56:06.935130 2856 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.204:6443/api/v1/nodes\": dial tcp 172.31.28.204:6443: connect: connection refused" node="ip-172-31-28-204" Jan 23 23:56:06.990867 containerd[2024]: time="2026-01-23T23:56:06.990804207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-204,Uid:2ef4fd9c522580e51847e531a406a31c,Namespace:kube-system,Attempt:0,}" Jan 23 23:56:07.001930 containerd[2024]: time="2026-01-23T23:56:07.001795019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-204,Uid:e45031d4cdc00a04273b83151faa2473,Namespace:kube-system,Attempt:0,}" Jan 23 23:56:07.010367 containerd[2024]: time="2026-01-23T23:56:07.010069487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-204,Uid:eb3284e2505de11c19436bea18e1c9fe,Namespace:kube-system,Attempt:0,}" Jan 23 23:56:07.118198 kubelet[2856]: E0123 23:56:07.118117 2856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-204?timeout=10s\": dial tcp 172.31.28.204:6443: connect: connection refused" interval="800ms" Jan 23 23:56:07.338029 kubelet[2856]: I0123 23:56:07.337877 2856 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-204" Jan 23 23:56:07.338804 kubelet[2856]: E0123 23:56:07.338745 2856 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.204:6443/api/v1/nodes\": dial tcp 172.31.28.204:6443: connect: connection refused" node="ip-172-31-28-204" Jan 23 23:56:07.345423 kubelet[2856]: E0123 23:56:07.345365 2856 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.28.204:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.28.204:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 23:56:07.438348 kubelet[2856]: E0123 23:56:07.437028 2856 reflector.go:200] "Failed to watch" 
err="failed to list *v1.Service: Get \"https://172.31.28.204:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.28.204:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 23:56:07.449480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount500471596.mount: Deactivated successfully. Jan 23 23:56:07.459175 containerd[2024]: time="2026-01-23T23:56:07.459119977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:56:07.460711 containerd[2024]: time="2026-01-23T23:56:07.460475041Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 23 23:56:07.462740 containerd[2024]: time="2026-01-23T23:56:07.462612913Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:56:07.466342 containerd[2024]: time="2026-01-23T23:56:07.465461437Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:56:07.466899 containerd[2024]: time="2026-01-23T23:56:07.466845205Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:56:07.467340 containerd[2024]: time="2026-01-23T23:56:07.467277013Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:56:07.467625 containerd[2024]: time="2026-01-23T23:56:07.467595001Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:56:07.472896 containerd[2024]: time="2026-01-23T23:56:07.472837129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:56:07.477282 containerd[2024]: time="2026-01-23T23:56:07.477225721Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 475.317458ms" Jan 23 23:56:07.484790 containerd[2024]: time="2026-01-23T23:56:07.484623542Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 493.707051ms" Jan 23 23:56:07.497652 containerd[2024]: time="2026-01-23T23:56:07.497251478Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 487.070643ms" Jan 23 23:56:07.523797 kubelet[2856]: E0123 23:56:07.523697 2856 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.28.204:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-28-204&limit=500&resourceVersion=0\": dial tcp 172.31.28.204:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 23:56:07.821901 containerd[2024]: time="2026-01-23T23:56:07.821720007Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:07.821901 containerd[2024]: time="2026-01-23T23:56:07.821841639Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:07.824094 containerd[2024]: time="2026-01-23T23:56:07.822824931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:07.825128 containerd[2024]: time="2026-01-23T23:56:07.824973351Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:07.826171 containerd[2024]: time="2026-01-23T23:56:07.825631935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:07.826171 containerd[2024]: time="2026-01-23T23:56:07.825728799Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:07.826171 containerd[2024]: time="2026-01-23T23:56:07.825766767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:07.826171 containerd[2024]: time="2026-01-23T23:56:07.825949119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:07.828955 containerd[2024]: time="2026-01-23T23:56:07.828793419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:07.828955 containerd[2024]: time="2026-01-23T23:56:07.828902019Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:07.832948 containerd[2024]: time="2026-01-23T23:56:07.831930663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:07.833402 containerd[2024]: time="2026-01-23T23:56:07.833265939Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:07.873726 systemd[1]: Started cri-containerd-d474f23380dc288b8e4555ff677365653647310436c2bb836bf4e46e7f669798.scope - libcontainer container d474f23380dc288b8e4555ff677365653647310436c2bb836bf4e46e7f669798. Jan 23 23:56:07.885418 systemd[1]: Started cri-containerd-8699028647641111ffd7a81703769235bb64c2e07ac3a55defd100ac62fbe41c.scope - libcontainer container 8699028647641111ffd7a81703769235bb64c2e07ac3a55defd100ac62fbe41c. 
Jan 23 23:56:07.904654 systemd[1]: Started cri-containerd-b3e721414d066c8c1e02a57a3d0a098814d5cbeecf58a0fa2cba6aa81f83308f.scope - libcontainer container b3e721414d066c8c1e02a57a3d0a098814d5cbeecf58a0fa2cba6aa81f83308f. Jan 23 23:56:07.919123 kubelet[2856]: E0123 23:56:07.919038 2856 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.28.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-204?timeout=10s\": dial tcp 172.31.28.204:6443: connect: connection refused" interval="1.6s" Jan 23 23:56:07.922283 kubelet[2856]: E0123 23:56:07.922201 2856 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.28.204:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.28.204:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 23:56:08.019807 containerd[2024]: time="2026-01-23T23:56:08.019616088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-28-204,Uid:eb3284e2505de11c19436bea18e1c9fe,Namespace:kube-system,Attempt:0,} returns sandbox id \"8699028647641111ffd7a81703769235bb64c2e07ac3a55defd100ac62fbe41c\"" Jan 23 23:56:08.033297 containerd[2024]: time="2026-01-23T23:56:08.033028608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-28-204,Uid:2ef4fd9c522580e51847e531a406a31c,Namespace:kube-system,Attempt:0,} returns sandbox id \"d474f23380dc288b8e4555ff677365653647310436c2bb836bf4e46e7f669798\"" Jan 23 23:56:08.034451 containerd[2024]: time="2026-01-23T23:56:08.034296096Z" level=info msg="CreateContainer within sandbox \"8699028647641111ffd7a81703769235bb64c2e07ac3a55defd100ac62fbe41c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 23:56:08.036293 containerd[2024]: time="2026-01-23T23:56:08.036242280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-28-204,Uid:e45031d4cdc00a04273b83151faa2473,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3e721414d066c8c1e02a57a3d0a098814d5cbeecf58a0fa2cba6aa81f83308f\"" Jan 23 23:56:08.047833 containerd[2024]: time="2026-01-23T23:56:08.047780388Z" level=info msg="CreateContainer within sandbox \"d474f23380dc288b8e4555ff677365653647310436c2bb836bf4e46e7f669798\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 23:56:08.051029 containerd[2024]: time="2026-01-23T23:56:08.050977848Z" level=info msg="CreateContainer within sandbox \"b3e721414d066c8c1e02a57a3d0a098814d5cbeecf58a0fa2cba6aa81f83308f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 23:56:08.062435 containerd[2024]: time="2026-01-23T23:56:08.062357580Z" level=info msg="CreateContainer within sandbox \"8699028647641111ffd7a81703769235bb64c2e07ac3a55defd100ac62fbe41c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"70f6171b30bcca19bbb4897d2f308afe3818b909b40e70e6263509f2093b15ed\"" Jan 23 23:56:08.064090 containerd[2024]: time="2026-01-23T23:56:08.063861444Z" level=info msg="StartContainer for \"70f6171b30bcca19bbb4897d2f308afe3818b909b40e70e6263509f2093b15ed\"" Jan 23 23:56:08.074550 containerd[2024]: time="2026-01-23T23:56:08.073130112Z" level=info msg="CreateContainer within sandbox \"d474f23380dc288b8e4555ff677365653647310436c2bb836bf4e46e7f669798\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"aeddac736b446909ba6fb5bd8cfcdcf1f12a9e3b3412f7cc20a37bb29d89c14c\"" Jan 23 23:56:08.075849 containerd[2024]: time="2026-01-23T23:56:08.075773328Z" level=info msg="StartContainer for \"aeddac736b446909ba6fb5bd8cfcdcf1f12a9e3b3412f7cc20a37bb29d89c14c\"" Jan 23 23:56:08.080411 containerd[2024]: time="2026-01-23T23:56:08.080224344Z" level=info msg="CreateContainer within sandbox \"b3e721414d066c8c1e02a57a3d0a098814d5cbeecf58a0fa2cba6aa81f83308f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1a01031adf4d8cb713e7912990f9558808ddaacbd8cd691ff153b8ea0e85b5bb\"" Jan 23 23:56:08.083617 containerd[2024]: time="2026-01-23T23:56:08.083519149Z" level=info msg="StartContainer for \"1a01031adf4d8cb713e7912990f9558808ddaacbd8cd691ff153b8ea0e85b5bb\"" Jan 23 23:56:08.143180 kubelet[2856]: I0123 23:56:08.143140 2856 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-204" Jan 23 23:56:08.151106 kubelet[2856]: E0123 23:56:08.144043 2856 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.28.204:6443/api/v1/nodes\": dial tcp 172.31.28.204:6443: connect: connection refused" node="ip-172-31-28-204" Jan 23 23:56:08.148776 systemd[1]: Started cri-containerd-70f6171b30bcca19bbb4897d2f308afe3818b909b40e70e6263509f2093b15ed.scope - libcontainer container 70f6171b30bcca19bbb4897d2f308afe3818b909b40e70e6263509f2093b15ed. Jan 23 23:56:08.161904 systemd[1]: Started cri-containerd-aeddac736b446909ba6fb5bd8cfcdcf1f12a9e3b3412f7cc20a37bb29d89c14c.scope - libcontainer container aeddac736b446909ba6fb5bd8cfcdcf1f12a9e3b3412f7cc20a37bb29d89c14c. Jan 23 23:56:08.172725 systemd[1]: Started cri-containerd-1a01031adf4d8cb713e7912990f9558808ddaacbd8cd691ff153b8ea0e85b5bb.scope - libcontainer container 1a01031adf4d8cb713e7912990f9558808ddaacbd8cd691ff153b8ea0e85b5bb. 
Jan 23 23:56:08.292803 containerd[2024]: time="2026-01-23T23:56:08.292557206Z" level=info msg="StartContainer for \"70f6171b30bcca19bbb4897d2f308afe3818b909b40e70e6263509f2093b15ed\" returns successfully" Jan 23 23:56:08.310107 containerd[2024]: time="2026-01-23T23:56:08.309823082Z" level=info msg="StartContainer for \"aeddac736b446909ba6fb5bd8cfcdcf1f12a9e3b3412f7cc20a37bb29d89c14c\" returns successfully" Jan 23 23:56:08.324345 containerd[2024]: time="2026-01-23T23:56:08.324168926Z" level=info msg="StartContainer for \"1a01031adf4d8cb713e7912990f9558808ddaacbd8cd691ff153b8ea0e85b5bb\" returns successfully" Jan 23 23:56:08.543073 kubelet[2856]: E0123 23:56:08.541453 2856 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.28.204:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.28.204:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 23:56:08.591705 kubelet[2856]: E0123 23:56:08.590863 2856 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-204\" not found" node="ip-172-31-28-204" Jan 23 23:56:08.604133 kubelet[2856]: E0123 23:56:08.604078 2856 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-204\" not found" node="ip-172-31-28-204" Jan 23 23:56:08.610051 kubelet[2856]: E0123 23:56:08.609986 2856 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-204\" not found" node="ip-172-31-28-204" Jan 23 23:56:09.613356 kubelet[2856]: E0123 23:56:09.611930 2856 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-204\" not found" node="ip-172-31-28-204" Jan 23 23:56:09.613356 kubelet[2856]: E0123 23:56:09.612017 2856 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-204\" not found" node="ip-172-31-28-204" Jan 23 23:56:09.748184 kubelet[2856]: I0123 23:56:09.748104 2856 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-204" Jan 23 23:56:10.615748 kubelet[2856]: E0123 23:56:10.615696 2856 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-204\" not found" node="ip-172-31-28-204" Jan 23 23:56:10.616716 kubelet[2856]: E0123 23:56:10.616672 2856 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-28-204\" not found" node="ip-172-31-28-204" Jan 23 23:56:11.887266 kubelet[2856]: E0123 23:56:11.887185 2856 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-28-204\" not found" node="ip-172-31-28-204" Jan 23 23:56:12.098622 kubelet[2856]: I0123 23:56:12.097654 2856 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-204" Jan 23 23:56:12.098622 kubelet[2856]: E0123 23:56:12.097714 2856 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ip-172-31-28-204\": node \"ip-172-31-28-204\" not found" Jan 23 23:56:12.106061 kubelet[2856]: I0123 23:56:12.106005 2856 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-204" Jan 23 
23:56:12.137116 kubelet[2856]: E0123 23:56:12.136758 2856 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-204\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-28-204" Jan 23 23:56:12.137116 kubelet[2856]: I0123 23:56:12.136803 2856 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-204" Jan 23 23:56:12.141955 kubelet[2856]: E0123 23:56:12.140418 2856 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-28-204\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-28-204" Jan 23 23:56:12.141955 kubelet[2856]: I0123 23:56:12.140500 2856 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-204" Jan 23 23:56:12.144919 kubelet[2856]: E0123 23:56:12.144865 2856 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-28-204\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-28-204" Jan 23 23:56:12.491778 kubelet[2856]: I0123 23:56:12.491143 2856 apiserver.go:52] "Watching apiserver" Jan 23 23:56:12.517112 kubelet[2856]: I0123 23:56:12.517071 2856 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:56:14.320959 systemd[1]: Reloading requested from client PID 3140 ('systemctl') (unit session-7.scope)... Jan 23 23:56:14.320993 systemd[1]: Reloading... Jan 23 23:56:14.523402 zram_generator::config[3183]: No configuration found. Jan 23 23:56:14.812033 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:56:15.044207 systemd[1]: Reloading finished in 722 ms. Jan 23 23:56:15.128499 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:15.147477 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 23:56:15.148118 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:15.148188 systemd[1]: kubelet.service: Consumed 1.389s CPU time, 125.1M memory peak, 0B memory swap peak. Jan 23 23:56:15.158918 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:56:15.509399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:56:15.527202 (kubelet)[3241]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:56:15.635471 kubelet[3241]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:56:15.635992 kubelet[3241]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:56:15.636089 kubelet[3241]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 23:56:15.636914 kubelet[3241]: I0123 23:56:15.636825 3241 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:56:15.663194 kubelet[3241]: I0123 23:56:15.663115 3241 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 23 23:56:15.663470 kubelet[3241]: I0123 23:56:15.663448 3241 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:56:15.665344 kubelet[3241]: I0123 23:56:15.663973 3241 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 23:56:15.666764 kubelet[3241]: I0123 23:56:15.666729 3241 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 23:56:15.672636 kubelet[3241]: I0123 23:56:15.672596 3241 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:56:15.681123 kubelet[3241]: E0123 23:56:15.681074 3241 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:56:15.681352 kubelet[3241]: I0123 23:56:15.681304 3241 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 23 23:56:15.686516 kubelet[3241]: I0123 23:56:15.686455 3241 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 23 23:56:15.688445 kubelet[3241]: I0123 23:56:15.688374 3241 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:56:15.690796 kubelet[3241]: I0123 23:56:15.688624 3241 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-28-204","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 23:56:15.691484 kubelet[3241]: I0123 23:56:15.691419 3241 topology_manager.go:138] "Creating 
topology manager with none policy" Jan 23 23:56:15.691484 kubelet[3241]: I0123 23:56:15.691464 3241 container_manager_linux.go:303] "Creating device plugin manager" Jan 23 23:56:15.691648 kubelet[3241]: I0123 23:56:15.691568 3241 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:56:15.692249 kubelet[3241]: I0123 23:56:15.691888 3241 kubelet.go:480] "Attempting to sync node with API server" Jan 23 23:56:15.692249 kubelet[3241]: I0123 23:56:15.691932 3241 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:56:15.692249 kubelet[3241]: I0123 23:56:15.691983 3241 kubelet.go:386] "Adding apiserver pod source" Jan 23 23:56:15.692249 kubelet[3241]: I0123 23:56:15.692014 3241 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:56:15.702889 kubelet[3241]: I0123 23:56:15.702703 3241 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:56:15.713019 kubelet[3241]: I0123 23:56:15.711214 3241 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 23:56:15.721019 kubelet[3241]: I0123 23:56:15.720660 3241 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:56:15.721019 kubelet[3241]: I0123 23:56:15.720729 3241 server.go:1289] "Started kubelet" Jan 23 23:56:15.729341 kubelet[3241]: I0123 23:56:15.729209 3241 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:56:15.730174 kubelet[3241]: I0123 23:56:15.730123 3241 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:56:15.730568 kubelet[3241]: I0123 23:56:15.730530 3241 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:56:15.733221 kubelet[3241]: I0123 23:56:15.732527 3241 server.go:317] "Adding debug handlers to kubelet server" Jan 23 23:56:15.733931 kubelet[3241]: I0123 23:56:15.733883 3241 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:56:15.739415 kubelet[3241]: I0123 23:56:15.738249 3241 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:56:15.747280 kubelet[3241]: I0123 23:56:15.745546 3241 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:56:15.747280 kubelet[3241]: E0123 23:56:15.745797 3241 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-28-204\" not found" Jan 23 23:56:15.747280 kubelet[3241]: I0123 23:56:15.746676 3241 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:56:15.747280 kubelet[3241]: I0123 23:56:15.746881 3241 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:56:15.783957 kubelet[3241]: I0123 23:56:15.783839 3241 factory.go:223] Registration of the systemd container factory successfully Jan 23 23:56:15.785538 kubelet[3241]: I0123 23:56:15.785496 3241 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:56:15.797496 kubelet[3241]: I0123 23:56:15.797297 3241 factory.go:223] Registration of the containerd container factory successfully Jan 23 23:56:15.818570 kubelet[3241]: I0123 23:56:15.818406 3241 
kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 23:56:15.825565 kubelet[3241]: I0123 23:56:15.825046 3241 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 23:56:15.825565 kubelet[3241]: I0123 23:56:15.825564 3241 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 23:56:15.826422 kubelet[3241]: I0123 23:56:15.825648 3241 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 23:56:15.826422 kubelet[3241]: I0123 23:56:15.825665 3241 kubelet.go:2436] "Starting kubelet main sync loop" Jan 23 23:56:15.826422 kubelet[3241]: E0123 23:56:15.825826 3241 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:56:15.926006 kubelet[3241]: E0123 23:56:15.925919 3241 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 23:56:15.926952 kubelet[3241]: I0123 23:56:15.926811 3241 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:56:15.926952 kubelet[3241]: I0123 23:56:15.926843 3241 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:56:15.926952 kubelet[3241]: I0123 23:56:15.926880 3241 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:56:15.927189 kubelet[3241]: I0123 23:56:15.927111 3241 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 23:56:15.927189 kubelet[3241]: I0123 23:56:15.927132 3241 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 23:56:15.927189 kubelet[3241]: I0123 23:56:15.927164 3241 policy_none.go:49] "None policy: Start" Jan 23 23:56:15.927189 kubelet[3241]: I0123 23:56:15.927182 3241 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:56:15.927429 kubelet[3241]: I0123 23:56:15.927202 3241 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:56:15.927429 kubelet[3241]: I0123 23:56:15.927407 3241 state_mem.go:75] "Updated machine memory state" Jan 23 23:56:15.939354 kubelet[3241]: E0123 23:56:15.938366 3241 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 23:56:15.939354 kubelet[3241]: I0123 23:56:15.939243 3241 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:56:15.939354 kubelet[3241]: I0123 23:56:15.939265 3241 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:56:15.941871 kubelet[3241]: I0123 23:56:15.941664 3241 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:56:15.945351 kubelet[3241]: E0123 23:56:15.945143 3241 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 23:56:16.057111 kubelet[3241]: I0123 23:56:16.056965 3241 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-28-204" Jan 23 23:56:16.072714 kubelet[3241]: I0123 23:56:16.071884 3241 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-28-204" Jan 23 23:56:16.072714 kubelet[3241]: I0123 23:56:16.071998 3241 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-28-204" Jan 23 23:56:16.127285 kubelet[3241]: I0123 23:56:16.127248 3241 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-204" Jan 23 23:56:16.127668 kubelet[3241]: I0123 23:56:16.127628 3241 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-28-204" Jan 23 23:56:16.128915 kubelet[3241]: I0123 23:56:16.127364 3241 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-28-204" Jan 23 23:56:16.153118 kubelet[3241]: I0123 23:56:16.152710 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e45031d4cdc00a04273b83151faa2473-ca-certs\") pod \"kube-controller-manager-ip-172-31-28-204\" (UID: \"e45031d4cdc00a04273b83151faa2473\") " pod="kube-system/kube-controller-manager-ip-172-31-28-204" Jan 23 23:56:16.153118 kubelet[3241]: I0123 23:56:16.152777 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e45031d4cdc00a04273b83151faa2473-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-28-204\" (UID: \"e45031d4cdc00a04273b83151faa2473\") " pod="kube-system/kube-controller-manager-ip-172-31-28-204" Jan 23 23:56:16.153118 kubelet[3241]: I0123 23:56:16.152822 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e45031d4cdc00a04273b83151faa2473-k8s-certs\") pod \"kube-controller-manager-ip-172-31-28-204\" (UID: \"e45031d4cdc00a04273b83151faa2473\") " pod="kube-system/kube-controller-manager-ip-172-31-28-204" Jan 23 23:56:16.153118 kubelet[3241]: I0123 23:56:16.152876 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e45031d4cdc00a04273b83151faa2473-kubeconfig\") pod \"kube-controller-manager-ip-172-31-28-204\" (UID: \"e45031d4cdc00a04273b83151faa2473\") " pod="kube-system/kube-controller-manager-ip-172-31-28-204" Jan 23 23:56:16.153118 kubelet[3241]: I0123 23:56:16.152932 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e45031d4cdc00a04273b83151faa2473-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-28-204\" (UID: \"e45031d4cdc00a04273b83151faa2473\") " pod="kube-system/kube-controller-manager-ip-172-31-28-204" Jan 23 23:56:16.153529 kubelet[3241]: I0123 23:56:16.152982 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2ef4fd9c522580e51847e531a406a31c-ca-certs\") pod \"kube-apiserver-ip-172-31-28-204\" (UID: \"2ef4fd9c522580e51847e531a406a31c\") " pod="kube-system/kube-apiserver-ip-172-31-28-204" Jan 23 23:56:16.153529 kubelet[3241]: I0123 
23:56:16.153040 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2ef4fd9c522580e51847e531a406a31c-k8s-certs\") pod \"kube-apiserver-ip-172-31-28-204\" (UID: \"2ef4fd9c522580e51847e531a406a31c\") " pod="kube-system/kube-apiserver-ip-172-31-28-204" Jan 23 23:56:16.153529 kubelet[3241]: I0123 23:56:16.153077 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb3284e2505de11c19436bea18e1c9fe-kubeconfig\") pod \"kube-scheduler-ip-172-31-28-204\" (UID: \"eb3284e2505de11c19436bea18e1c9fe\") " pod="kube-system/kube-scheduler-ip-172-31-28-204" Jan 23 23:56:16.153529 kubelet[3241]: I0123 23:56:16.153150 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2ef4fd9c522580e51847e531a406a31c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-28-204\" (UID: \"2ef4fd9c522580e51847e531a406a31c\") " pod="kube-system/kube-apiserver-ip-172-31-28-204" Jan 23 23:56:16.712183 kubelet[3241]: I0123 23:56:16.712105 3241 apiserver.go:52] "Watching apiserver" Jan 23 23:56:16.747426 kubelet[3241]: I0123 23:56:16.747291 3241 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:56:16.879381 kubelet[3241]: I0123 23:56:16.879267 3241 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-28-204" Jan 23 23:56:16.893793 kubelet[3241]: E0123 23:56:16.893472 3241 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-28-204\" already exists" pod="kube-system/kube-apiserver-ip-172-31-28-204" Jan 23 23:56:16.959552 kubelet[3241]: I0123 23:56:16.959434 3241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-28-204" podStartSLOduration=0.959412553 podStartE2EDuration="959.412553ms" podCreationTimestamp="2026-01-23 23:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:56:16.958716985 +0000 UTC m=+1.421232176" watchObservedRunningTime="2026-01-23 23:56:16.959412553 +0000 UTC m=+1.421927756" Jan 23 23:56:16.961535 kubelet[3241]: I0123 23:56:16.961419 3241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-28-204" podStartSLOduration=0.961395541 podStartE2EDuration="961.395541ms" podCreationTimestamp="2026-01-23 23:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:56:16.931527864 +0000 UTC m=+1.394043091" watchObservedRunningTime="2026-01-23 23:56:16.961395541 +0000 UTC m=+1.423910792" Jan 23 23:56:17.020603 kubelet[3241]: I0123 23:56:17.020349 3241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-28-204" podStartSLOduration=1.020296905 podStartE2EDuration="1.020296905s" podCreationTimestamp="2026-01-23 23:56:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:56:16.997951021 +0000 UTC m=+1.460466236" watchObservedRunningTime="2026-01-23 23:56:17.020296905 +0000 UTC m=+1.482812120" Jan 23 
23:56:17.914490 update_engine[2011]: I20260123 23:56:17.914382 2011 update_attempter.cc:509] Updating boot flags... Jan 23 23:56:18.012418 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3304) Jan 23 23:56:18.304358 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3307) Jan 23 23:56:18.604372 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3307) Jan 23 23:56:19.586167 kubelet[3241]: I0123 23:56:19.586113 3241 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 23:56:19.588803 kubelet[3241]: I0123 23:56:19.588025 3241 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 23:56:19.588897 containerd[2024]: time="2026-01-23T23:56:19.587052650Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 23 23:56:20.602253 systemd[1]: Created slice kubepods-besteffort-podd9401e7e_c791_47fe_9460_c229d2920404.slice - libcontainer container kubepods-besteffort-podd9401e7e_c791_47fe_9460_c229d2920404.slice. Jan 23 23:56:20.687872 kubelet[3241]: I0123 23:56:20.687231 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d9401e7e-c791-47fe-9460-c229d2920404-kube-proxy\") pod \"kube-proxy-dvmzb\" (UID: \"d9401e7e-c791-47fe-9460-c229d2920404\") " pod="kube-system/kube-proxy-dvmzb" Jan 23 23:56:20.687872 kubelet[3241]: I0123 23:56:20.687337 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d9401e7e-c791-47fe-9460-c229d2920404-lib-modules\") pod \"kube-proxy-dvmzb\" (UID: \"d9401e7e-c791-47fe-9460-c229d2920404\") " pod="kube-system/kube-proxy-dvmzb" Jan 23 23:56:20.687872 kubelet[3241]: I0123 23:56:20.687383 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d9401e7e-c791-47fe-9460-c229d2920404-xtables-lock\") pod \"kube-proxy-dvmzb\" (UID: \"d9401e7e-c791-47fe-9460-c229d2920404\") " pod="kube-system/kube-proxy-dvmzb" Jan 23 23:56:20.687872 kubelet[3241]: I0123 23:56:20.687420 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvmlj\" (UniqueName: \"kubernetes.io/projected/d9401e7e-c791-47fe-9460-c229d2920404-kube-api-access-kvmlj\") pod \"kube-proxy-dvmzb\" (UID: \"d9401e7e-c791-47fe-9460-c229d2920404\") " pod="kube-system/kube-proxy-dvmzb" Jan 23 23:56:20.741056 kubelet[3241]: I0123 23:56:20.740969 3241 status_manager.go:895] "Failed to get status for pod" podUID="18abfb99-8729-47bd-a6b5-01ded22e2bca" pod="tigera-operator/tigera-operator-7dcd859c48-lpm27" err="pods \"tigera-operator-7dcd859c48-lpm27\" is forbidden: User \"system:node:ip-172-31-28-204\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node 'ip-172-31-28-204' and this object" Jan 23 23:56:20.749245 systemd[1]: Created slice kubepods-besteffort-pod18abfb99_8729_47bd_a6b5_01ded22e2bca.slice - libcontainer container kubepods-besteffort-pod18abfb99_8729_47bd_a6b5_01ded22e2bca.slice. 
Jan 23 23:56:20.788718 kubelet[3241]: I0123 23:56:20.788664 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/18abfb99-8729-47bd-a6b5-01ded22e2bca-var-lib-calico\") pod \"tigera-operator-7dcd859c48-lpm27\" (UID: \"18abfb99-8729-47bd-a6b5-01ded22e2bca\") " pod="tigera-operator/tigera-operator-7dcd859c48-lpm27"
Jan 23 23:56:20.789457 kubelet[3241]: I0123 23:56:20.788942 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-598hv\" (UniqueName: \"kubernetes.io/projected/18abfb99-8729-47bd-a6b5-01ded22e2bca-kube-api-access-598hv\") pod \"tigera-operator-7dcd859c48-lpm27\" (UID: \"18abfb99-8729-47bd-a6b5-01ded22e2bca\") " pod="tigera-operator/tigera-operator-7dcd859c48-lpm27"
Jan 23 23:56:20.925884 containerd[2024]: time="2026-01-23T23:56:20.924415936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dvmzb,Uid:d9401e7e-c791-47fe-9460-c229d2920404,Namespace:kube-system,Attempt:0,}"
Jan 23 23:56:20.989710 containerd[2024]: time="2026-01-23T23:56:20.989488337Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:56:20.989710 containerd[2024]: time="2026-01-23T23:56:20.989597105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:56:20.989710 containerd[2024]: time="2026-01-23T23:56:20.989651225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:56:20.990152 containerd[2024]: time="2026-01-23T23:56:20.989813585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:56:21.035655 systemd[1]: Started cri-containerd-541412b26a0c21f1a18734d51d0f4d961e266799a98c784bab2d420d2467ca6f.scope - libcontainer container 541412b26a0c21f1a18734d51d0f4d961e266799a98c784bab2d420d2467ca6f.
Jan 23 23:56:21.056910 containerd[2024]: time="2026-01-23T23:56:21.056819173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-lpm27,Uid:18abfb99-8729-47bd-a6b5-01ded22e2bca,Namespace:tigera-operator,Attempt:0,}"
Jan 23 23:56:21.084475 containerd[2024]: time="2026-01-23T23:56:21.084399745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dvmzb,Uid:d9401e7e-c791-47fe-9460-c229d2920404,Namespace:kube-system,Attempt:0,} returns sandbox id \"541412b26a0c21f1a18734d51d0f4d961e266799a98c784bab2d420d2467ca6f\""
Jan 23 23:56:21.098057 containerd[2024]: time="2026-01-23T23:56:21.097977589Z" level=info msg="CreateContainer within sandbox \"541412b26a0c21f1a18734d51d0f4d961e266799a98c784bab2d420d2467ca6f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 23 23:56:21.117099 containerd[2024]: time="2026-01-23T23:56:21.116077465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:56:21.117099 containerd[2024]: time="2026-01-23T23:56:21.116175577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:56:21.117099 containerd[2024]: time="2026-01-23T23:56:21.116211613Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:56:21.117099 containerd[2024]: time="2026-01-23T23:56:21.116439601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:56:21.140542 containerd[2024]: time="2026-01-23T23:56:21.140448589Z" level=info msg="CreateContainer within sandbox \"541412b26a0c21f1a18734d51d0f4d961e266799a98c784bab2d420d2467ca6f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"43b89c38a81a7a3f9786b026252d602f82dd6c78e32a7d99601abc37ed4014cd\""
Jan 23 23:56:21.142825 containerd[2024]: time="2026-01-23T23:56:21.141865741Z" level=info msg="StartContainer for \"43b89c38a81a7a3f9786b026252d602f82dd6c78e32a7d99601abc37ed4014cd\""
Jan 23 23:56:21.159756 systemd[1]: Started cri-containerd-1592d848ecde5a96e3daad2baf90fa9e6c0e3d95e664cb8c4d5733e24f7f5368.scope - libcontainer container 1592d848ecde5a96e3daad2baf90fa9e6c0e3d95e664cb8c4d5733e24f7f5368.
Jan 23 23:56:21.220700 systemd[1]: Started cri-containerd-43b89c38a81a7a3f9786b026252d602f82dd6c78e32a7d99601abc37ed4014cd.scope - libcontainer container 43b89c38a81a7a3f9786b026252d602f82dd6c78e32a7d99601abc37ed4014cd.
Jan 23 23:56:21.253369 containerd[2024]: time="2026-01-23T23:56:21.253056914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-lpm27,Uid:18abfb99-8729-47bd-a6b5-01ded22e2bca,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"1592d848ecde5a96e3daad2baf90fa9e6c0e3d95e664cb8c4d5733e24f7f5368\""
Jan 23 23:56:21.262912 containerd[2024]: time="2026-01-23T23:56:21.262262102Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\""
Jan 23 23:56:21.300517 containerd[2024]: time="2026-01-23T23:56:21.300428606Z" level=info msg="StartContainer for \"43b89c38a81a7a3f9786b026252d602f82dd6c78e32a7d99601abc37ed4014cd\" returns successfully"
Jan 23 23:56:21.826681 systemd[1]: run-containerd-runc-k8s.io-541412b26a0c21f1a18734d51d0f4d961e266799a98c784bab2d420d2467ca6f-runc.DstX4X.mount: Deactivated successfully.
Jan 23 23:56:22.809286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount14824945.mount: Deactivated successfully.
Jan 23 23:56:23.553932 containerd[2024]: time="2026-01-23T23:56:23.553861457Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:56:23.559058 containerd[2024]: time="2026-01-23T23:56:23.558997565Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004"
Jan 23 23:56:23.562340 containerd[2024]: time="2026-01-23T23:56:23.561700877Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:56:23.568007 containerd[2024]: time="2026-01-23T23:56:23.567957797Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:56:23.571566 containerd[2024]: time="2026-01-23T23:56:23.571507673Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.308383863s"
Jan 23 23:56:23.571844 containerd[2024]: time="2026-01-23T23:56:23.571811837Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\""
Jan 23 23:56:23.582230 containerd[2024]: time="2026-01-23T23:56:23.582176513Z" level=info msg="CreateContainer within sandbox \"1592d848ecde5a96e3daad2baf90fa9e6c0e3d95e664cb8c4d5733e24f7f5368\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 23 23:56:23.607300 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2312839126.mount: Deactivated successfully.
Jan 23 23:56:23.608460 containerd[2024]: time="2026-01-23T23:56:23.608390334Z" level=info msg="CreateContainer within sandbox \"1592d848ecde5a96e3daad2baf90fa9e6c0e3d95e664cb8c4d5733e24f7f5368\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fd6d9eebcec74f2f298992dd55260fa71f17eb7b48cac2ddbe06ca667c3872c0\""
Jan 23 23:56:23.610235 containerd[2024]: time="2026-01-23T23:56:23.610188414Z" level=info msg="StartContainer for \"fd6d9eebcec74f2f298992dd55260fa71f17eb7b48cac2ddbe06ca667c3872c0\""
Jan 23 23:56:23.669617 systemd[1]: Started cri-containerd-fd6d9eebcec74f2f298992dd55260fa71f17eb7b48cac2ddbe06ca667c3872c0.scope - libcontainer container fd6d9eebcec74f2f298992dd55260fa71f17eb7b48cac2ddbe06ca667c3872c0.
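containerd reports the tigera-operator pull completing "in 2.308383863s", and the kubelet's startup-latency line further down records firstStartedPulling/lastFinishedPulling for the same pull. A quick Python check (timestamps truncated to microseconds, since strptime's %f accepts at most six digits) that the two views agree to within a few milliseconds:

```python
from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f %z"
# kubelet's view of the same pull, taken from the pod_startup_latency_tracker line below
started = datetime.strptime("2026-01-23 23:56:21.258880 +0000", FMT)
finished = datetime.strptime("2026-01-23 23:56:23.573037 +0000", FMT)

print(finished - started)  # 0:00:02.314157 — the pull *wait*, a few ms more than
                           # containerd's 2.308383863s, which covers only the pull itself
```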
Jan 23 23:56:23.725000 containerd[2024]: time="2026-01-23T23:56:23.724557930Z" level=info msg="StartContainer for \"fd6d9eebcec74f2f298992dd55260fa71f17eb7b48cac2ddbe06ca667c3872c0\" returns successfully"
Jan 23 23:56:23.919423 kubelet[3241]: I0123 23:56:23.919305 3241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dvmzb" podStartSLOduration=3.919285399 podStartE2EDuration="3.919285399s" podCreationTimestamp="2026-01-23 23:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:56:21.918262433 +0000 UTC m=+6.380777648" watchObservedRunningTime="2026-01-23 23:56:23.919285399 +0000 UTC m=+8.381800602"
Jan 23 23:56:25.872085 kubelet[3241]: I0123 23:56:25.871721 3241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-lpm27" podStartSLOduration=3.5575411260000003 podStartE2EDuration="5.871697961s" podCreationTimestamp="2026-01-23 23:56:20 +0000 UTC" firstStartedPulling="2026-01-23 23:56:21.258880706 +0000 UTC m=+5.721395897" lastFinishedPulling="2026-01-23 23:56:23.573037529 +0000 UTC m=+8.035552732" observedRunningTime="2026-01-23 23:56:23.920619043 +0000 UTC m=+8.383134258" watchObservedRunningTime="2026-01-23 23:56:25.871697961 +0000 UTC m=+10.334213176"
Jan 23 23:56:32.352047 sudo[2352]: pam_unix(sudo:session): session closed for user root
Jan 23 23:56:32.434656 sshd[2349]: pam_unix(sshd:session): session closed for user core
Jan 23 23:56:32.440587 systemd-logind[2010]: Session 7 logged out. Waiting for processes to exit.
Jan 23 23:56:32.442302 systemd[1]: session-7.scope: Deactivated successfully.
Jan 23 23:56:32.444506 systemd[1]: session-7.scope: Consumed 9.888s CPU time, 156.1M memory peak, 0B memory swap peak.
Jan 23 23:56:32.445765 systemd[1]: sshd@6-172.31.28.204:22-4.153.228.146:56896.service: Deactivated successfully.
Jan 23 23:56:32.456548 systemd-logind[2010]: Removed session 7.
Jan 23 23:56:49.496017 systemd[1]: Created slice kubepods-besteffort-pod92561615_cfd3_4463_b7b4_49dbc7fc9586.slice - libcontainer container kubepods-besteffort-pod92561615_cfd3_4463_b7b4_49dbc7fc9586.slice.
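The pod_startup_latency_tracker lines encode a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the time spent waiting on the image pull (which is why kube-proxy, whose image needed no pull, shows identical values). A sanity check of the tigera-operator numbers, a minimal sketch working in seconds-within-the-minute:

```python
# Seconds within 23:56, read off the tigera-operator tracker line above.
created = 20.0            # podCreationTimestamp   23:56:20
observed = 25.871697961   # watchObservedRunningTime
pull_from = 21.258880706  # firstStartedPulling
pull_to = 23.573037529    # lastFinishedPulling

e2e = observed - created
slo = e2e - (pull_to - pull_from)
print(f"{e2e:.9f}")  # 5.871697961 == podStartE2EDuration
print(f"{slo:.9f}")  # 3.557541138 ≈ podStartSLOduration (3.5575411260...);
                     # the ~10 ns residue presumably comes from the kubelet
                     # mixing wall-clock and monotonic (m=+...) readings
```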
Jan 23 23:56:49.503306 kubelet[3241]: E0123 23:56:49.500582 3241 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"tigera-ca-bundle\" is forbidden: User \"system:node:ip-172-31-28-204\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-28-204' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"tigera-ca-bundle\"" type="*v1.ConfigMap"
Jan 23 23:56:49.503306 kubelet[3241]: E0123 23:56:49.500696 3241 reflector.go:200] "Failed to watch" err="failed to list *v1.Secret: secrets \"typha-certs\" is forbidden: User \"system:node:ip-172-31-28-204\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-28-204' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"typha-certs\"" type="*v1.Secret"
Jan 23 23:56:49.503306 kubelet[3241]: E0123 23:56:49.500785 3241 reflector.go:200] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-28-204\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-28-204' and this object" logger="UnhandledError" reflector="object-\"calico-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap"
Jan 23 23:56:49.503306 kubelet[3241]: I0123 23:56:49.500841 3241 status_manager.go:895] "Failed to get status for pod" podUID="92561615-cfd3-4463-b7b4-49dbc7fc9586" pod="calico-system/calico-typha-7954d87474-tpkmw" err="pods \"calico-typha-7954d87474-tpkmw\" is forbidden: User \"system:node:ip-172-31-28-204\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-28-204' and this object"
Jan 23 23:56:49.584393 kubelet[3241]: I0123 23:56:49.583692 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92561615-cfd3-4463-b7b4-49dbc7fc9586-tigera-ca-bundle\") pod \"calico-typha-7954d87474-tpkmw\" (UID: \"92561615-cfd3-4463-b7b4-49dbc7fc9586\") " pod="calico-system/calico-typha-7954d87474-tpkmw"
Jan 23 23:56:49.584393 kubelet[3241]: I0123 23:56:49.583768 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/92561615-cfd3-4463-b7b4-49dbc7fc9586-typha-certs\") pod \"calico-typha-7954d87474-tpkmw\" (UID: \"92561615-cfd3-4463-b7b4-49dbc7fc9586\") " pod="calico-system/calico-typha-7954d87474-tpkmw"
Jan 23 23:56:49.584393 kubelet[3241]: I0123 23:56:49.583809 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bzc6k\" (UniqueName: \"kubernetes.io/projected/92561615-cfd3-4463-b7b4-49dbc7fc9586-kube-api-access-bzc6k\") pod \"calico-typha-7954d87474-tpkmw\" (UID: \"92561615-cfd3-4463-b7b4-49dbc7fc9586\") " pod="calico-system/calico-typha-7954d87474-tpkmw"
Jan 23 23:56:49.772618 systemd[1]: Created slice kubepods-besteffort-pod555eeb80_0812_4d4a_8e72_46b99352da08.slice - libcontainer container kubepods-besteffort-pod555eeb80_0812_4d4a_8e72_46b99352da08.slice.
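The "no relationship found between node ... and this object" errors above come from the API server's node authorizer: a kubelet may read a secret or configmap only once the authorizer's graph links that object to a pod bound to the node, so these listings fail in the brief window between pod creation and the graph catching up. A toy model of the check, purely illustrative (the real graph lives inside kube-apiserver):

```python
# Hypothetical, much-simplified stand-in for the node authorizer's graph check.
pods_bound_to_node = {
    "ip-172-31-28-204": [],  # calico-typha's binding hasn't reached the graph yet
}
configmaps_used_by_pod = {
    "calico-typha-7954d87474-tpkmw": ["tigera-ca-bundle", "kube-root-ca.crt"],
}

def node_can_get(node: str, configmap: str) -> bool:
    """A node may read a configmap only via some pod bound to it."""
    return any(configmap in configmaps_used_by_pod.get(pod, ())
               for pod in pods_bound_to_node.get(node, ()))

print(node_can_get("ip-172-31-28-204", "tigera-ca-bundle"))  # False -> "forbidden"
```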
Jan 23 23:56:49.785528 kubelet[3241]: I0123 23:56:49.785239 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/555eeb80-0812-4d4a-8e72-46b99352da08-lib-modules\") pod \"calico-node-kp558\" (UID: \"555eeb80-0812-4d4a-8e72-46b99352da08\") " pod="calico-system/calico-node-kp558"
Jan 23 23:56:49.785695 kubelet[3241]: I0123 23:56:49.785541 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/555eeb80-0812-4d4a-8e72-46b99352da08-var-lib-calico\") pod \"calico-node-kp558\" (UID: \"555eeb80-0812-4d4a-8e72-46b99352da08\") " pod="calico-system/calico-node-kp558"
Jan 23 23:56:49.785695 kubelet[3241]: I0123 23:56:49.785586 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcnq4\" (UniqueName: \"kubernetes.io/projected/555eeb80-0812-4d4a-8e72-46b99352da08-kube-api-access-mcnq4\") pod \"calico-node-kp558\" (UID: \"555eeb80-0812-4d4a-8e72-46b99352da08\") " pod="calico-system/calico-node-kp558"
Jan 23 23:56:49.785695 kubelet[3241]: I0123 23:56:49.785627 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/555eeb80-0812-4d4a-8e72-46b99352da08-cni-bin-dir\") pod \"calico-node-kp558\" (UID: \"555eeb80-0812-4d4a-8e72-46b99352da08\") " pod="calico-system/calico-node-kp558"
Jan 23 23:56:49.785695 kubelet[3241]: I0123 23:56:49.785666 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/555eeb80-0812-4d4a-8e72-46b99352da08-policysync\") pod \"calico-node-kp558\" (UID: \"555eeb80-0812-4d4a-8e72-46b99352da08\") " pod="calico-system/calico-node-kp558"
Jan 23 23:56:49.785917 kubelet[3241]: I0123 23:56:49.785703 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/555eeb80-0812-4d4a-8e72-46b99352da08-var-run-calico\") pod \"calico-node-kp558\" (UID: \"555eeb80-0812-4d4a-8e72-46b99352da08\") " pod="calico-system/calico-node-kp558"
Jan 23 23:56:49.785917 kubelet[3241]: I0123 23:56:49.785738 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/555eeb80-0812-4d4a-8e72-46b99352da08-flexvol-driver-host\") pod \"calico-node-kp558\" (UID: \"555eeb80-0812-4d4a-8e72-46b99352da08\") " pod="calico-system/calico-node-kp558"
Jan 23 23:56:49.785917 kubelet[3241]: I0123 23:56:49.785775 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/555eeb80-0812-4d4a-8e72-46b99352da08-tigera-ca-bundle\") pod \"calico-node-kp558\" (UID: \"555eeb80-0812-4d4a-8e72-46b99352da08\") " pod="calico-system/calico-node-kp558"
Jan 23 23:56:49.785917 kubelet[3241]: I0123 23:56:49.785814 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/555eeb80-0812-4d4a-8e72-46b99352da08-cni-log-dir\") pod \"calico-node-kp558\" (UID: \"555eeb80-0812-4d4a-8e72-46b99352da08\") " pod="calico-system/calico-node-kp558"
Jan 23 23:56:49.785917 kubelet[3241]: I0123 23:56:49.785852 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/555eeb80-0812-4d4a-8e72-46b99352da08-xtables-lock\") pod \"calico-node-kp558\" (UID: \"555eeb80-0812-4d4a-8e72-46b99352da08\") " pod="calico-system/calico-node-kp558"
Jan 23 23:56:49.786183 kubelet[3241]: I0123 23:56:49.785890 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/555eeb80-0812-4d4a-8e72-46b99352da08-cni-net-dir\") pod \"calico-node-kp558\" (UID: \"555eeb80-0812-4d4a-8e72-46b99352da08\") " pod="calico-system/calico-node-kp558"
Jan 23 23:56:49.786183 kubelet[3241]: I0123 23:56:49.785924 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/555eeb80-0812-4d4a-8e72-46b99352da08-node-certs\") pod \"calico-node-kp558\" (UID: \"555eeb80-0812-4d4a-8e72-46b99352da08\") " pod="calico-system/calico-node-kp558"
Jan 23 23:56:49.899802 kubelet[3241]: E0123 23:56:49.899719 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:56:49.899802 kubelet[3241]: W0123 23:56:49.899782 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:56:49.900578 kubelet[3241]: E0123 23:56:49.899881 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 23 23:56:49.941237 kubelet[3241]: E0123 23:56:49.940741 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rn45p" podUID="46c86ab0-1223-4a22-bfcf-7f463abcf340"
[The same three-entry FlexVolume probe failure (driver-call.go:262 "Failed to unmarshal output for command: init", driver-call.go:149 "driver call failed: executable ... not found in $PATH", plugins.go:703 "Error dynamically probing plugins") recurs dozens of times between Jan 23 23:56:49.971 and Jan 23 23:56:50.825; only the distinct entries interleaved with that run are reproduced below.]
Jan 23 23:56:50.000591 kubelet[3241]: I0123 23:56:50.000494 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/46c86ab0-1223-4a22-bfcf-7f463abcf340-kubelet-dir\") pod \"csi-node-driver-rn45p\" (UID: \"46c86ab0-1223-4a22-bfcf-7f463abcf340\") " pod="calico-system/csi-node-driver-rn45p"
Jan 23 23:56:50.004003 kubelet[3241]: I0123 23:56:50.003740 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/46c86ab0-1223-4a22-bfcf-7f463abcf340-varrun\") pod \"csi-node-driver-rn45p\" (UID: \"46c86ab0-1223-4a22-bfcf-7f463abcf340\") " pod="calico-system/csi-node-driver-rn45p"
Jan 23 23:56:50.004368 kubelet[3241]: I0123 23:56:50.004251 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvhxj\" (UniqueName: \"kubernetes.io/projected/46c86ab0-1223-4a22-bfcf-7f463abcf340-kube-api-access-pvhxj\") pod \"csi-node-driver-rn45p\" (UID: \"46c86ab0-1223-4a22-bfcf-7f463abcf340\") " pod="calico-system/csi-node-driver-rn45p"
Jan 23 23:56:50.006370 kubelet[3241]: I0123 23:56:50.005964 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/46c86ab0-1223-4a22-bfcf-7f463abcf340-registration-dir\") pod \"csi-node-driver-rn45p\" (UID: \"46c86ab0-1223-4a22-bfcf-7f463abcf340\") " pod="calico-system/csi-node-driver-rn45p"
Jan 23 23:56:50.009391 kubelet[3241]: I0123 23:56:50.009252 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/46c86ab0-1223-4a22-bfcf-7f463abcf340-socket-dir\") pod \"csi-node-driver-rn45p\" (UID: \"46c86ab0-1223-4a22-bfcf-7f463abcf340\") " pod="calico-system/csi-node-driver-rn45p"
Error: unexpected end of JSON input" Jan 23 23:56:50.686723 kubelet[3241]: E0123 23:56:50.686620 3241 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 23 23:56:50.688806 kubelet[3241]: E0123 23:56:50.688754 3241 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/92561615-cfd3-4463-b7b4-49dbc7fc9586-tigera-ca-bundle podName:92561615-cfd3-4463-b7b4-49dbc7fc9586 nodeName:}" failed. No retries permitted until 2026-01-23 23:56:51.188513112 +0000 UTC m=+35.651028315 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/92561615-cfd3-4463-b7b4-49dbc7fc9586-tigera-ca-bundle") pod "calico-typha-7954d87474-tpkmw" (UID: "92561615-cfd3-4463-b7b4-49dbc7fc9586") : failed to sync configmap cache: timed out waiting for the condition Jan 23 23:56:50.699042 kubelet[3241]: E0123 23:56:50.698984 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:50.699295 kubelet[3241]: W0123 23:56:50.699116 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:50.699295 kubelet[3241]: E0123 23:56:50.699151 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:50.724003 kubelet[3241]: E0123 23:56:50.723879 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:50.724003 kubelet[3241]: W0123 23:56:50.723908 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:50.724003 kubelet[3241]: E0123 23:56:50.723937 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:50.825229 kubelet[3241]: E0123 23:56:50.825187 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:50.825229 kubelet[3241]: W0123 23:56:50.825221 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:50.825452 kubelet[3241]: E0123 23:56:50.825351 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:50.889421 kubelet[3241]: E0123 23:56:50.889363 3241 configmap.go:193] Couldn't get configMap calico-system/tigera-ca-bundle: failed to sync configmap cache: timed out waiting for the condition Jan 23 23:56:50.889597 kubelet[3241]: E0123 23:56:50.889475 3241 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/555eeb80-0812-4d4a-8e72-46b99352da08-tigera-ca-bundle podName:555eeb80-0812-4d4a-8e72-46b99352da08 nodeName:}" failed. No retries permitted until 2026-01-23 23:56:51.389447137 +0000 UTC m=+35.851962340 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tigera-ca-bundle" (UniqueName: "kubernetes.io/configmap/555eeb80-0812-4d4a-8e72-46b99352da08-tigera-ca-bundle") pod "calico-node-kp558" (UID: "555eeb80-0812-4d4a-8e72-46b99352da08") : failed to sync configmap cache: timed out waiting for the condition Jan 23 23:56:50.926445 kubelet[3241]: E0123 23:56:50.926281 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:50.926445 kubelet[3241]: W0123 23:56:50.926334 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:50.926445 kubelet[3241]: E0123 23:56:50.926386 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:50.927632 kubelet[3241]: E0123 23:56:50.927586 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:50.927632 kubelet[3241]: W0123 23:56:50.927619 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:50.927789 kubelet[3241]: E0123 23:56:50.927646 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:51.030761 kubelet[3241]: E0123 23:56:51.030723 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:51.031564 kubelet[3241]: W0123 23:56:51.031391 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:51.031564 kubelet[3241]: E0123 23:56:51.031474 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:51.032282 kubelet[3241]: E0123 23:56:51.032229 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:51.032420 kubelet[3241]: W0123 23:56:51.032267 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:51.032420 kubelet[3241]: E0123 23:56:51.032350 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:56:51.133298 kubelet[3241]: E0123 23:56:51.133063 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:51.133298 kubelet[3241]: W0123 23:56:51.133093 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:51.133298 kubelet[3241]: E0123 23:56:51.133122 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:51.133936 kubelet[3241]: E0123 23:56:51.133807 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:51.133936 kubelet[3241]: W0123 23:56:51.133832 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:51.133936 kubelet[3241]: E0123 23:56:51.133859 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:51.235977 kubelet[3241]: E0123 23:56:51.235191 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:51.235977 kubelet[3241]: W0123 23:56:51.235227 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:51.235977 kubelet[3241]: E0123 23:56:51.235260 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:51.235977 kubelet[3241]: E0123 23:56:51.235688 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:51.235977 kubelet[3241]: W0123 23:56:51.235706 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:51.235977 kubelet[3241]: E0123 23:56:51.235728 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:51.238115 kubelet[3241]: E0123 23:56:51.237925 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:51.238115 kubelet[3241]: W0123 23:56:51.237962 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:51.238115 kubelet[3241]: E0123 23:56:51.237992 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:56:51.239046 kubelet[3241]: E0123 23:56:51.239010 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:51.239046 kubelet[3241]: W0123 23:56:51.239044 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:51.239253 kubelet[3241]: E0123 23:56:51.239073 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:51.239905 kubelet[3241]: E0123 23:56:51.239872 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:51.239905 kubelet[3241]: W0123 23:56:51.239904 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:51.240085 kubelet[3241]: E0123 23:56:51.239930 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:51.240389 kubelet[3241]: E0123 23:56:51.240337 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:51.240483 kubelet[3241]: W0123 23:56:51.240387 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:51.240483 kubelet[3241]: E0123 23:56:51.240411 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:51.242233 kubelet[3241]: E0123 23:56:51.242195 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:51.242233 kubelet[3241]: W0123 23:56:51.242229 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:51.242474 kubelet[3241]: E0123 23:56:51.242259 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:56:51.310236 containerd[2024]: time="2026-01-23T23:56:51.310132651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7954d87474-tpkmw,Uid:92561615-cfd3-4463-b7b4-49dbc7fc9586,Namespace:calico-system,Attempt:0,}" Jan 23 23:56:51.338124 kubelet[3241]: E0123 23:56:51.337802 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:51.338124 kubelet[3241]: W0123 23:56:51.337852 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:51.338124 kubelet[3241]: E0123 23:56:51.337886 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:51.351593 containerd[2024]: time="2026-01-23T23:56:51.350972419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:51.351593 containerd[2024]: time="2026-01-23T23:56:51.351071011Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:51.351593 containerd[2024]: time="2026-01-23T23:56:51.351107047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:51.351593 containerd[2024]: time="2026-01-23T23:56:51.351266059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:51.395651 systemd[1]: Started cri-containerd-e08a9f24ad792cf24b8ea6d905e183b1531fe2b29d533646accfe1df80c837c2.scope - libcontainer container e08a9f24ad792cf24b8ea6d905e183b1531fe2b29d533646accfe1df80c837c2. Jan 23 23:56:51.439799 kubelet[3241]: E0123 23:56:51.439546 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:51.439799 kubelet[3241]: W0123 23:56:51.439589 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:51.439799 kubelet[3241]: E0123 23:56:51.439634 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:51.440953 kubelet[3241]: E0123 23:56:51.440910 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:51.440953 kubelet[3241]: W0123 23:56:51.440946 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:51.440953 kubelet[3241]: E0123 23:56:51.440979 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:56:51.443125 kubelet[3241]: E0123 23:56:51.443037 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:51.443125 kubelet[3241]: W0123 23:56:51.443072 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:51.443125 kubelet[3241]: E0123 23:56:51.443104 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:51.445723 kubelet[3241]: E0123 23:56:51.445490 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:51.445723 kubelet[3241]: W0123 23:56:51.445527 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:51.445723 kubelet[3241]: E0123 23:56:51.445560 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:51.448862 kubelet[3241]: E0123 23:56:51.448594 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:51.448862 kubelet[3241]: W0123 23:56:51.448630 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:51.448862 kubelet[3241]: E0123 23:56:51.448689 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:51.452658 kubelet[3241]: E0123 23:56:51.452598 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:51.452658 kubelet[3241]: W0123 23:56:51.452660 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:51.452861 kubelet[3241]: E0123 23:56:51.452694 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:56:51.471157 containerd[2024]: time="2026-01-23T23:56:51.471071948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7954d87474-tpkmw,Uid:92561615-cfd3-4463-b7b4-49dbc7fc9586,Namespace:calico-system,Attempt:0,} returns sandbox id \"e08a9f24ad792cf24b8ea6d905e183b1531fe2b29d533646accfe1df80c837c2\"" Jan 23 23:56:51.474870 containerd[2024]: time="2026-01-23T23:56:51.474800732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 23:56:51.583995 containerd[2024]: time="2026-01-23T23:56:51.583388649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kp558,Uid:555eeb80-0812-4d4a-8e72-46b99352da08,Namespace:calico-system,Attempt:0,}" Jan 23 23:56:51.625744 containerd[2024]: time="2026-01-23T23:56:51.625169229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:51.625744 containerd[2024]: time="2026-01-23T23:56:51.625281357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:51.625744 containerd[2024]: time="2026-01-23T23:56:51.625363497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:51.626277 containerd[2024]: time="2026-01-23T23:56:51.625890297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:51.655663 systemd[1]: Started cri-containerd-f1bad532bbb37ecd5e4767847da01970b57ad5d7f2a01b0e40d4369716528c79.scope - libcontainer container f1bad532bbb37ecd5e4767847da01970b57ad5d7f2a01b0e40d4369716528c79. Jan 23 23:56:51.700609 containerd[2024]: time="2026-01-23T23:56:51.700484265Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-kp558,Uid:555eeb80-0812-4d4a-8e72-46b99352da08,Namespace:calico-system,Attempt:0,} returns sandbox id \"f1bad532bbb37ecd5e4767847da01970b57ad5d7f2a01b0e40d4369716528c79\"" Jan 23 23:56:51.827448 kubelet[3241]: E0123 23:56:51.826690 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rn45p" podUID="46c86ab0-1223-4a22-bfcf-7f463abcf340" Jan 23 23:56:52.658571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2119985624.mount: Deactivated successfully. 
Jan 23 23:56:53.413974 containerd[2024]: time="2026-01-23T23:56:53.413918434Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:53.415926 containerd[2024]: time="2026-01-23T23:56:53.415871278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Jan 23 23:56:53.416268 containerd[2024]: time="2026-01-23T23:56:53.416071150Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:53.419800 containerd[2024]: time="2026-01-23T23:56:53.419735962Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:53.421543 containerd[2024]: time="2026-01-23T23:56:53.421414438Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.946551306s" Jan 23 23:56:53.421543 containerd[2024]: time="2026-01-23T23:56:53.421472062Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 23 23:56:53.427367 containerd[2024]: time="2026-01-23T23:56:53.426379558Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 23:56:53.457753 containerd[2024]: time="2026-01-23T23:56:53.457528762Z" level=info msg="CreateContainer within sandbox \"e08a9f24ad792cf24b8ea6d905e183b1531fe2b29d533646accfe1df80c837c2\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 23:56:53.482202 containerd[2024]: time="2026-01-23T23:56:53.482125114Z" level=info msg="CreateContainer within sandbox \"e08a9f24ad792cf24b8ea6d905e183b1531fe2b29d533646accfe1df80c837c2\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f0b5b6df0e2e174c39eaefee82e25d8f99a2444cd4fa70d2728a86dd9babdc98\"" Jan 23 23:56:53.483722 containerd[2024]: time="2026-01-23T23:56:53.483654214Z" level=info msg="StartContainer for \"f0b5b6df0e2e174c39eaefee82e25d8f99a2444cd4fa70d2728a86dd9babdc98\"" Jan 23 23:56:53.542980 systemd[1]: Started cri-containerd-f0b5b6df0e2e174c39eaefee82e25d8f99a2444cd4fa70d2728a86dd9babdc98.scope - libcontainer container f0b5b6df0e2e174c39eaefee82e25d8f99a2444cd4fa70d2728a86dd9babdc98. 
Jan 23 23:56:53.622044 containerd[2024]: time="2026-01-23T23:56:53.621195035Z" level=info msg="StartContainer for \"f0b5b6df0e2e174c39eaefee82e25d8f99a2444cd4fa70d2728a86dd9babdc98\" returns successfully" Jan 23 23:56:53.829176 kubelet[3241]: E0123 23:56:53.828982 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rn45p" podUID="46c86ab0-1223-4a22-bfcf-7f463abcf340" Jan 23 23:56:54.026834 kubelet[3241]: E0123 23:56:54.026779 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.026834 kubelet[3241]: W0123 23:56:54.026820 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.027015 kubelet[3241]: E0123 23:56:54.026855 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.027625 kubelet[3241]: E0123 23:56:54.027578 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.027730 kubelet[3241]: W0123 23:56:54.027612 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.027730 kubelet[3241]: E0123 23:56:54.027685 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.028429 kubelet[3241]: E0123 23:56:54.028369 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.028429 kubelet[3241]: W0123 23:56:54.028424 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.028599 kubelet[3241]: E0123 23:56:54.028454 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.029691 kubelet[3241]: E0123 23:56:54.029332 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.029691 kubelet[3241]: W0123 23:56:54.029684 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.029980 kubelet[3241]: E0123 23:56:54.029751 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:56:54.031182 kubelet[3241]: E0123 23:56:54.031108 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.031182 kubelet[3241]: W0123 23:56:54.031170 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.031898 kubelet[3241]: E0123 23:56:54.031205 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.032437 kubelet[3241]: E0123 23:56:54.032325 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.032437 kubelet[3241]: W0123 23:56:54.032395 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.033375 kubelet[3241]: E0123 23:56:54.032428 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.034745 kubelet[3241]: E0123 23:56:54.034663 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.034745 kubelet[3241]: W0123 23:56:54.034729 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.034925 kubelet[3241]: E0123 23:56:54.034764 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.035259 kubelet[3241]: E0123 23:56:54.035217 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.035259 kubelet[3241]: W0123 23:56:54.035250 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.035424 kubelet[3241]: E0123 23:56:54.035277 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.036842 kubelet[3241]: E0123 23:56:54.036783 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.036842 kubelet[3241]: W0123 23:56:54.036824 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.037070 kubelet[3241]: E0123 23:56:54.036859 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:56:54.037388 kubelet[3241]: E0123 23:56:54.037348 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.037388 kubelet[3241]: W0123 23:56:54.037380 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.037527 kubelet[3241]: E0123 23:56:54.037406 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.038421 kubelet[3241]: E0123 23:56:54.038356 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.038421 kubelet[3241]: W0123 23:56:54.038410 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.038807 kubelet[3241]: E0123 23:56:54.038445 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.039158 kubelet[3241]: E0123 23:56:54.039008 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.039158 kubelet[3241]: W0123 23:56:54.039040 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.039158 kubelet[3241]: E0123 23:56:54.039067 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.039973 kubelet[3241]: E0123 23:56:54.039932 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.039973 kubelet[3241]: W0123 23:56:54.039969 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.040256 kubelet[3241]: E0123 23:56:54.040000 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.041736 kubelet[3241]: E0123 23:56:54.041671 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.041736 kubelet[3241]: W0123 23:56:54.041724 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.042439 kubelet[3241]: E0123 23:56:54.041759 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:56:54.042838 kubelet[3241]: E0123 23:56:54.042796 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.042838 kubelet[3241]: W0123 23:56:54.042832 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.042980 kubelet[3241]: E0123 23:56:54.042865 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.062393 kubelet[3241]: E0123 23:56:54.062169 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.062393 kubelet[3241]: W0123 23:56:54.062210 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.062393 kubelet[3241]: E0123 23:56:54.062245 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.062987 kubelet[3241]: E0123 23:56:54.062915 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.062987 kubelet[3241]: W0123 23:56:54.062973 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.063140 kubelet[3241]: E0123 23:56:54.063004 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.063602 kubelet[3241]: E0123 23:56:54.063546 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.063602 kubelet[3241]: W0123 23:56:54.063578 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.063788 kubelet[3241]: E0123 23:56:54.063606 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.064092 kubelet[3241]: E0123 23:56:54.064055 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.064092 kubelet[3241]: W0123 23:56:54.064084 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.064230 kubelet[3241]: E0123 23:56:54.064108 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:56:54.064835 kubelet[3241]: E0123 23:56:54.064794 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.064835 kubelet[3241]: W0123 23:56:54.064827 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.065113 kubelet[3241]: E0123 23:56:54.064854 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.065841 kubelet[3241]: E0123 23:56:54.065796 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.065841 kubelet[3241]: W0123 23:56:54.065833 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.066001 kubelet[3241]: E0123 23:56:54.065865 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.067666 kubelet[3241]: E0123 23:56:54.067612 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.067666 kubelet[3241]: W0123 23:56:54.067652 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.067887 kubelet[3241]: E0123 23:56:54.067704 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.068762 kubelet[3241]: E0123 23:56:54.068703 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.068762 kubelet[3241]: W0123 23:56:54.068744 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.068952 kubelet[3241]: E0123 23:56:54.068776 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.069408 kubelet[3241]: E0123 23:56:54.069368 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.069408 kubelet[3241]: W0123 23:56:54.069401 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.069595 kubelet[3241]: E0123 23:56:54.069429 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:56:54.071085 kubelet[3241]: E0123 23:56:54.071034 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.071085 kubelet[3241]: W0123 23:56:54.071073 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.071275 kubelet[3241]: E0123 23:56:54.071108 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.072772 kubelet[3241]: E0123 23:56:54.072714 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.072772 kubelet[3241]: W0123 23:56:54.072752 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.073011 kubelet[3241]: E0123 23:56:54.072787 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.073429 kubelet[3241]: E0123 23:56:54.073388 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.073429 kubelet[3241]: W0123 23:56:54.073422 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.073563 kubelet[3241]: E0123 23:56:54.073451 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.074743 kubelet[3241]: E0123 23:56:54.074693 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.074743 kubelet[3241]: W0123 23:56:54.074730 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.074975 kubelet[3241]: E0123 23:56:54.074763 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.075154 kubelet[3241]: E0123 23:56:54.075121 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.075154 kubelet[3241]: W0123 23:56:54.075148 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.075264 kubelet[3241]: E0123 23:56:54.075174 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:56:54.078722 kubelet[3241]: E0123 23:56:54.078670 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.078722 kubelet[3241]: W0123 23:56:54.078713 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.078959 kubelet[3241]: E0123 23:56:54.078748 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.082726 kubelet[3241]: E0123 23:56:54.082456 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.082726 kubelet[3241]: W0123 23:56:54.082525 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.082726 kubelet[3241]: E0123 23:56:54.082563 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.084170 kubelet[3241]: E0123 23:56:54.083945 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.084170 kubelet[3241]: W0123 23:56:54.083976 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.084170 kubelet[3241]: E0123 23:56:54.084007 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:54.088475 kubelet[3241]: E0123 23:56:54.088422 3241 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:54.088475 kubelet[3241]: W0123 23:56:54.088462 3241 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:54.088663 kubelet[3241]: E0123 23:56:54.088497 3241 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:56:54.670358 containerd[2024]: time="2026-01-23T23:56:54.670260156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:54.673926 containerd[2024]: time="2026-01-23T23:56:54.673851240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Jan 23 23:56:54.675124 containerd[2024]: time="2026-01-23T23:56:54.675059820Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:54.681790 containerd[2024]: time="2026-01-23T23:56:54.681712248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:54.684580 containerd[2024]: time="2026-01-23T23:56:54.684507168Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.258058646s" Jan 23 23:56:54.684580 containerd[2024]: time="2026-01-23T23:56:54.684573240Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 23 23:56:54.690997 containerd[2024]: time="2026-01-23T23:56:54.690922740Z" level=info msg="CreateContainer within sandbox \"f1bad532bbb37ecd5e4767847da01970b57ad5d7f2a01b0e40d4369716528c79\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 23:56:54.722407 containerd[2024]: time="2026-01-23T23:56:54.722272452Z" level=info msg="CreateContainer within sandbox \"f1bad532bbb37ecd5e4767847da01970b57ad5d7f2a01b0e40d4369716528c79\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"dcc3822cb625dbc3d31447e83032ddaaa0571e17c0b1ad49724a81af18a360c6\"" Jan 23 23:56:54.725164 containerd[2024]: time="2026-01-23T23:56:54.723455088Z" level=info msg="StartContainer for \"dcc3822cb625dbc3d31447e83032ddaaa0571e17c0b1ad49724a81af18a360c6\"" Jan 23 23:56:54.787646 systemd[1]: Started cri-containerd-dcc3822cb625dbc3d31447e83032ddaaa0571e17c0b1ad49724a81af18a360c6.scope - libcontainer container dcc3822cb625dbc3d31447e83032ddaaa0571e17c0b1ad49724a81af18a360c6. Jan 23 23:56:54.851497 containerd[2024]: time="2026-01-23T23:56:54.851431357Z" level=info msg="StartContainer for \"dcc3822cb625dbc3d31447e83032ddaaa0571e17c0b1ad49724a81af18a360c6\" returns successfully" Jan 23 23:56:54.887363 systemd[1]: cri-containerd-dcc3822cb625dbc3d31447e83032ddaaa0571e17c0b1ad49724a81af18a360c6.scope: Deactivated successfully. 
Jan 23 23:56:55.049839 kubelet[3241]: I0123 23:56:55.049578 3241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7954d87474-tpkmw" podStartSLOduration=4.099775976 podStartE2EDuration="6.049554442s" podCreationTimestamp="2026-01-23 23:56:49 +0000 UTC" firstStartedPulling="2026-01-23 23:56:51.473513276 +0000 UTC m=+35.936028479" lastFinishedPulling="2026-01-23 23:56:53.423291694 +0000 UTC m=+37.885806945" observedRunningTime="2026-01-23 23:56:54.092644389 +0000 UTC m=+38.555159604" watchObservedRunningTime="2026-01-23 23:56:55.049554442 +0000 UTC m=+39.512069645" Jan 23 23:56:55.099743 containerd[2024]: time="2026-01-23T23:56:55.099609622Z" level=info msg="shim disconnected" id=dcc3822cb625dbc3d31447e83032ddaaa0571e17c0b1ad49724a81af18a360c6 namespace=k8s.io Jan 23 23:56:55.099933 containerd[2024]: time="2026-01-23T23:56:55.099726442Z" level=warning msg="cleaning up after shim disconnected" id=dcc3822cb625dbc3d31447e83032ddaaa0571e17c0b1ad49724a81af18a360c6 namespace=k8s.io Jan 23 23:56:55.099933 containerd[2024]: time="2026-01-23T23:56:55.099889150Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:56:55.434591 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcc3822cb625dbc3d31447e83032ddaaa0571e17c0b1ad49724a81af18a360c6-rootfs.mount: Deactivated successfully. Jan 23 23:56:55.828188 kubelet[3241]: E0123 23:56:55.827537 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rn45p" podUID="46c86ab0-1223-4a22-bfcf-7f463abcf340" Jan 23 23:56:56.023923 containerd[2024]: time="2026-01-23T23:56:56.023837555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 23:56:57.830860 kubelet[3241]: E0123 23:56:57.830673 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rn45p" podUID="46c86ab0-1223-4a22-bfcf-7f463abcf340" Jan 23 23:56:58.874251 containerd[2024]: time="2026-01-23T23:56:58.873859541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:58.876984 containerd[2024]: time="2026-01-23T23:56:58.876929957Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 23 23:56:58.878629 containerd[2024]: time="2026-01-23T23:56:58.878489717Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:58.884966 containerd[2024]: time="2026-01-23T23:56:58.883231241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:58.884966 containerd[2024]: time="2026-01-23T23:56:58.884667221Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.86075769s" Jan 23 23:56:58.884966 containerd[2024]: time="2026-01-23T23:56:58.884712029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 23 23:56:58.891647 containerd[2024]: time="2026-01-23T23:56:58.891581021Z" level=info msg="CreateContainer within sandbox \"f1bad532bbb37ecd5e4767847da01970b57ad5d7f2a01b0e40d4369716528c79\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 23:56:58.925727 containerd[2024]: time="2026-01-23T23:56:58.925513313Z" level=info msg="CreateContainer within sandbox \"f1bad532bbb37ecd5e4767847da01970b57ad5d7f2a01b0e40d4369716528c79\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a361092a06e8a039d46ace3b39c0a7473b8badbfecf1c60d5edb4a657128ee2c\"" Jan 23 23:56:58.928136 containerd[2024]: time="2026-01-23T23:56:58.927125669Z" level=info msg="StartContainer for \"a361092a06e8a039d46ace3b39c0a7473b8badbfecf1c60d5edb4a657128ee2c\"" Jan 23 23:56:58.995662 systemd[1]: Started cri-containerd-a361092a06e8a039d46ace3b39c0a7473b8badbfecf1c60d5edb4a657128ee2c.scope - libcontainer container a361092a06e8a039d46ace3b39c0a7473b8badbfecf1c60d5edb4a657128ee2c. Jan 23 23:56:59.059105 containerd[2024]: time="2026-01-23T23:56:59.058690994Z" level=info msg="StartContainer for \"a361092a06e8a039d46ace3b39c0a7473b8badbfecf1c60d5edb4a657128ee2c\" returns successfully" Jan 23 23:56:59.828984 kubelet[3241]: E0123 23:56:59.828424 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-rn45p" podUID="46c86ab0-1223-4a22-bfcf-7f463abcf340" Jan 23 23:57:00.115946 containerd[2024]: time="2026-01-23T23:57:00.114796743Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 23:57:00.123170 systemd[1]: cri-containerd-a361092a06e8a039d46ace3b39c0a7473b8badbfecf1c60d5edb4a657128ee2c.scope: Deactivated successfully. Jan 23 23:57:00.124236 systemd[1]: cri-containerd-a361092a06e8a039d46ace3b39c0a7473b8badbfecf1c60d5edb4a657128ee2c.scope: Consumed 1.000s CPU time. Jan 23 23:57:00.175129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a361092a06e8a039d46ace3b39c0a7473b8badbfecf1c60d5edb4a657128ee2c-rootfs.mount: Deactivated successfully. Jan 23 23:57:00.186978 kubelet[3241]: I0123 23:57:00.186924 3241 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 23:57:00.292621 systemd[1]: Created slice kubepods-burstable-pod9f58d96e_4844_4626_a760_be9823990f64.slice - libcontainer container kubepods-burstable-pod9f58d96e_4844_4626_a760_be9823990f64.slice. 
Jan 23 23:57:00.319623 kubelet[3241]: I0123 23:57:00.319541 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91093aed-82d4-44e1-9d6b-e10aeaeca718-config-volume\") pod \"coredns-674b8bbfcf-d4bjn\" (UID: \"91093aed-82d4-44e1-9d6b-e10aeaeca718\") " pod="kube-system/coredns-674b8bbfcf-d4bjn" Jan 23 23:57:00.319623 kubelet[3241]: I0123 23:57:00.319623 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9f58d96e-4844-4626-a760-be9823990f64-config-volume\") pod \"coredns-674b8bbfcf-6s8v5\" (UID: \"9f58d96e-4844-4626-a760-be9823990f64\") " pod="kube-system/coredns-674b8bbfcf-6s8v5" Jan 23 23:57:00.319904 kubelet[3241]: I0123 23:57:00.319671 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr8nh\" (UniqueName: \"kubernetes.io/projected/91093aed-82d4-44e1-9d6b-e10aeaeca718-kube-api-access-wr8nh\") pod \"coredns-674b8bbfcf-d4bjn\" (UID: \"91093aed-82d4-44e1-9d6b-e10aeaeca718\") " pod="kube-system/coredns-674b8bbfcf-d4bjn" Jan 23 23:57:00.319904 kubelet[3241]: I0123 23:57:00.319712 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/74258f81-20b6-4c16-8e17-d994c72b6c19-calico-apiserver-certs\") pod \"calico-apiserver-6976454ff7-t76qf\" (UID: \"74258f81-20b6-4c16-8e17-d994c72b6c19\") " pod="calico-apiserver/calico-apiserver-6976454ff7-t76qf" Jan 23 23:57:00.319904 kubelet[3241]: I0123 23:57:00.319759 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6td6\" (UniqueName: \"kubernetes.io/projected/74258f81-20b6-4c16-8e17-d994c72b6c19-kube-api-access-n6td6\") pod \"calico-apiserver-6976454ff7-t76qf\" (UID: \"74258f81-20b6-4c16-8e17-d994c72b6c19\") " pod="calico-apiserver/calico-apiserver-6976454ff7-t76qf" Jan 23 23:57:00.319904 kubelet[3241]: I0123 23:57:00.319800 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4k5zm\" (UniqueName: \"kubernetes.io/projected/9f58d96e-4844-4626-a760-be9823990f64-kube-api-access-4k5zm\") pod \"coredns-674b8bbfcf-6s8v5\" (UID: \"9f58d96e-4844-4626-a760-be9823990f64\") " pod="kube-system/coredns-674b8bbfcf-6s8v5" Jan 23 23:57:00.324845 systemd[1]: Created slice kubepods-besteffort-pod74258f81_20b6_4c16_8e17_d994c72b6c19.slice - libcontainer container kubepods-besteffort-pod74258f81_20b6_4c16_8e17_d994c72b6c19.slice. Jan 23 23:57:00.353176 systemd[1]: Created slice kubepods-burstable-pod91093aed_82d4_44e1_9d6b_e10aeaeca718.slice - libcontainer container kubepods-burstable-pod91093aed_82d4_44e1_9d6b_e10aeaeca718.slice. 
Jan 23 23:57:00.420815 kubelet[3241]: I0123 23:57:00.420740 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/64d067e9-db06-43a4-8ec2-5418bd9de44b-tigera-ca-bundle\") pod \"calico-kube-controllers-74cfd6877d-hr9jw\" (UID: \"64d067e9-db06-43a4-8ec2-5418bd9de44b\") " pod="calico-system/calico-kube-controllers-74cfd6877d-hr9jw" Jan 23 23:57:00.428435 kubelet[3241]: I0123 23:57:00.420824 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxpxk\" (UniqueName: \"kubernetes.io/projected/64d067e9-db06-43a4-8ec2-5418bd9de44b-kube-api-access-rxpxk\") pod \"calico-kube-controllers-74cfd6877d-hr9jw\" (UID: \"64d067e9-db06-43a4-8ec2-5418bd9de44b\") " pod="calico-system/calico-kube-controllers-74cfd6877d-hr9jw" Jan 23 23:57:00.509079 systemd[1]: Created slice kubepods-besteffort-pod40024e0b_dc12_464a_9bd9_6f315f803fe4.slice - libcontainer container kubepods-besteffort-pod40024e0b_dc12_464a_9bd9_6f315f803fe4.slice. Jan 23 23:57:00.521696 kubelet[3241]: I0123 23:57:00.521632 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/40024e0b-dc12-464a-9bd9-6f315f803fe4-goldmane-key-pair\") pod \"goldmane-666569f655-q6gtf\" (UID: \"40024e0b-dc12-464a-9bd9-6f315f803fe4\") " pod="calico-system/goldmane-666569f655-q6gtf" Jan 23 23:57:00.523508 kubelet[3241]: I0123 23:57:00.522515 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/40024e0b-dc12-464a-9bd9-6f315f803fe4-goldmane-ca-bundle\") pod \"goldmane-666569f655-q6gtf\" (UID: \"40024e0b-dc12-464a-9bd9-6f315f803fe4\") " pod="calico-system/goldmane-666569f655-q6gtf" Jan 23 23:57:00.523508 kubelet[3241]: I0123 23:57:00.522566 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47fj5\" (UniqueName: \"kubernetes.io/projected/40024e0b-dc12-464a-9bd9-6f315f803fe4-kube-api-access-47fj5\") pod \"goldmane-666569f655-q6gtf\" (UID: \"40024e0b-dc12-464a-9bd9-6f315f803fe4\") " pod="calico-system/goldmane-666569f655-q6gtf" Jan 23 23:57:00.523508 kubelet[3241]: I0123 23:57:00.522612 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/40024e0b-dc12-464a-9bd9-6f315f803fe4-config\") pod \"goldmane-666569f655-q6gtf\" (UID: \"40024e0b-dc12-464a-9bd9-6f315f803fe4\") " pod="calico-system/goldmane-666569f655-q6gtf" Jan 23 23:57:00.560072 systemd[1]: Created slice kubepods-besteffort-pod82695ea2_4281_4b27_853c_a668e2f1fb61.slice - libcontainer container kubepods-besteffort-pod82695ea2_4281_4b27_853c_a668e2f1fb61.slice. Jan 23 23:57:00.605540 containerd[2024]: time="2026-01-23T23:57:00.604280813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6s8v5,Uid:9f58d96e-4844-4626-a760-be9823990f64,Namespace:kube-system,Attempt:0,}" Jan 23 23:57:00.612818 systemd[1]: Created slice kubepods-besteffort-pod7d79c384_4d50_4538_9d9a_312b65c47eb8.slice - libcontainer container kubepods-besteffort-pod7d79c384_4d50_4538_9d9a_312b65c47eb8.slice. 
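The "Created slice" names above encode each pod's QoS class and UID: kubelet's systemd cgroup driver nests pods under kubepods-<qos>-pod<uid>.slice, with the dashes in the UID mapped to underscores. A sketch of that naming, checked against a slice from this log:

```go
package main

import (
	"fmt"
	"strings"
)

// Mirror the systemd cgroup-driver naming visible in the "Created slice"
// entries: per-QoS parent slice, pod UID with dashes turned to underscores.
func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("burstable", "9f58d96e-4844-4626-a760-be9823990f64"))
	// kubepods-burstable-pod9f58d96e_4844_4626_a760_be9823990f64.slice
}
```

Consistent with the QoS rules, the coredns pods land in burstable slices while goldmane, whisker and the calico-apiserver pods here are besteffort, i.e. they declare no resource requests.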
Jan 23 23:57:00.624289 kubelet[3241]: I0123 23:57:00.624211 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/82695ea2-4281-4b27-853c-a668e2f1fb61-whisker-backend-key-pair\") pod \"whisker-84494b5d4d-lj25m\" (UID: \"82695ea2-4281-4b27-853c-a668e2f1fb61\") " pod="calico-system/whisker-84494b5d4d-lj25m" Jan 23 23:57:00.628731 kubelet[3241]: I0123 23:57:00.625980 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82695ea2-4281-4b27-853c-a668e2f1fb61-whisker-ca-bundle\") pod \"whisker-84494b5d4d-lj25m\" (UID: \"82695ea2-4281-4b27-853c-a668e2f1fb61\") " pod="calico-system/whisker-84494b5d4d-lj25m" Jan 23 23:57:00.628731 kubelet[3241]: I0123 23:57:00.626035 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7d79c384-4d50-4538-9d9a-312b65c47eb8-calico-apiserver-certs\") pod \"calico-apiserver-6976454ff7-ddg9z\" (UID: \"7d79c384-4d50-4538-9d9a-312b65c47eb8\") " pod="calico-apiserver/calico-apiserver-6976454ff7-ddg9z" Jan 23 23:57:00.628731 kubelet[3241]: I0123 23:57:00.626129 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czjb2\" (UniqueName: \"kubernetes.io/projected/82695ea2-4281-4b27-853c-a668e2f1fb61-kube-api-access-czjb2\") pod \"whisker-84494b5d4d-lj25m\" (UID: \"82695ea2-4281-4b27-853c-a668e2f1fb61\") " pod="calico-system/whisker-84494b5d4d-lj25m" Jan 23 23:57:00.628731 kubelet[3241]: I0123 23:57:00.626181 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fbcb\" (UniqueName: \"kubernetes.io/projected/7d79c384-4d50-4538-9d9a-312b65c47eb8-kube-api-access-2fbcb\") pod \"calico-apiserver-6976454ff7-ddg9z\" (UID: \"7d79c384-4d50-4538-9d9a-312b65c47eb8\") " pod="calico-apiserver/calico-apiserver-6976454ff7-ddg9z" Jan 23 23:57:00.639128 systemd[1]: Created slice kubepods-besteffort-pod64d067e9_db06_43a4_8ec2_5418bd9de44b.slice - libcontainer container kubepods-besteffort-pod64d067e9_db06_43a4_8ec2_5418bd9de44b.slice. 
Jan 23 23:57:00.641937 containerd[2024]: time="2026-01-23T23:57:00.640908162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6976454ff7-t76qf,Uid:74258f81-20b6-4c16-8e17-d994c72b6c19,Namespace:calico-apiserver,Attempt:0,}" Jan 23 23:57:00.648863 containerd[2024]: time="2026-01-23T23:57:00.648654570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74cfd6877d-hr9jw,Uid:64d067e9-db06-43a4-8ec2-5418bd9de44b,Namespace:calico-system,Attempt:0,}" Jan 23 23:57:00.668493 containerd[2024]: time="2026-01-23T23:57:00.668415882Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d4bjn,Uid:91093aed-82d4-44e1-9d6b-e10aeaeca718,Namespace:kube-system,Attempt:0,}" Jan 23 23:57:00.711045 containerd[2024]: time="2026-01-23T23:57:00.709574502Z" level=info msg="shim disconnected" id=a361092a06e8a039d46ace3b39c0a7473b8badbfecf1c60d5edb4a657128ee2c namespace=k8s.io Jan 23 23:57:00.711045 containerd[2024]: time="2026-01-23T23:57:00.709652274Z" level=warning msg="cleaning up after shim disconnected" id=a361092a06e8a039d46ace3b39c0a7473b8badbfecf1c60d5edb4a657128ee2c namespace=k8s.io Jan 23 23:57:00.711045 containerd[2024]: time="2026-01-23T23:57:00.709710882Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:00.823343 containerd[2024]: time="2026-01-23T23:57:00.823071810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-q6gtf,Uid:40024e0b-dc12-464a-9bd9-6f315f803fe4,Namespace:calico-system,Attempt:0,}" Jan 23 23:57:00.887076 containerd[2024]: time="2026-01-23T23:57:00.886541227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84494b5d4d-lj25m,Uid:82695ea2-4281-4b27-853c-a668e2f1fb61,Namespace:calico-system,Attempt:0,}" Jan 23 23:57:00.928113 containerd[2024]: time="2026-01-23T23:57:00.927784351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6976454ff7-ddg9z,Uid:7d79c384-4d50-4538-9d9a-312b65c47eb8,Namespace:calico-apiserver,Attempt:0,}" Jan 23 23:57:01.069932 containerd[2024]: time="2026-01-23T23:57:01.069772456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 23:57:01.164348 containerd[2024]: time="2026-01-23T23:57:01.162955948Z" level=error msg="Failed to destroy network for sandbox \"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.165648 containerd[2024]: time="2026-01-23T23:57:01.165565792Z" level=error msg="Failed to destroy network for sandbox \"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.175708 containerd[2024]: time="2026-01-23T23:57:01.175587244Z" level=error msg="encountered an error cleaning up failed sandbox \"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.175865 containerd[2024]: time="2026-01-23T23:57:01.175745332Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-6s8v5,Uid:9f58d96e-4844-4626-a760-be9823990f64,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.195034 containerd[2024]: time="2026-01-23T23:57:01.193935640Z" level=error msg="encountered an error cleaning up failed sandbox \"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.195034 containerd[2024]: time="2026-01-23T23:57:01.194078428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6976454ff7-t76qf,Uid:74258f81-20b6-4c16-8e17-d994c72b6c19,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.197343 kubelet[3241]: E0123 23:57:01.196392 3241 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.197343 kubelet[3241]: E0123 23:57:01.196503 3241 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6976454ff7-t76qf" Jan 23 23:57:01.197343 kubelet[3241]: E0123 23:57:01.196541 3241 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6976454ff7-t76qf" Jan 23 23:57:01.200807 kubelet[3241]: E0123 23:57:01.196612 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6976454ff7-t76qf_calico-apiserver(74258f81-20b6-4c16-8e17-d994c72b6c19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6976454ff7-t76qf_calico-apiserver(74258f81-20b6-4c16-8e17-d994c72b6c19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6976454ff7-t76qf" podUID="74258f81-20b6-4c16-8e17-d994c72b6c19" Jan 23 23:57:01.200807 kubelet[3241]: E0123 23:57:01.196991 3241 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.200807 kubelet[3241]: E0123 23:57:01.197035 3241 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6s8v5" Jan 23 23:57:01.201062 kubelet[3241]: E0123 23:57:01.197068 3241 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6s8v5" Jan 23 23:57:01.201062 kubelet[3241]: E0123 23:57:01.197127 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6s8v5_kube-system(9f58d96e-4844-4626-a760-be9823990f64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6s8v5_kube-system(9f58d96e-4844-4626-a760-be9823990f64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6s8v5" podUID="9f58d96e-4844-4626-a760-be9823990f64" Jan 23 23:57:01.260449 containerd[2024]: time="2026-01-23T23:57:01.260146877Z" level=error msg="Failed to destroy network for sandbox \"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.266494 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6-shm.mount: Deactivated successfully. 
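Every sandbox failure in this burst carries the same root-cause string: the Calico CNI binary cannot stat /var/lib/calico/nodename, a file that calico/node writes on startup so the plugin knows which node its endpoints belong to. A minimal reproduction of that gate (it mirrors the error text in the log, not Calico's actual source):

```go
package main

import (
	"fmt"
	"os"
)

// calico/node writes the node's name here once it is up; until then every
// CNI ADD and DEL fails with the stat error seen throughout this log.
const nodenameFile = "/var/lib/calico/nodename"

func main() {
	name, err := os.ReadFile(nodenameFile)
	if err != nil {
		// err reads like "open /var/lib/calico/nodename: no such file or directory"
		fmt.Printf("%v: check that the calico/node container is running and has mounted /var/lib/calico/\n", err)
		os.Exit(1)
	}
	fmt.Printf("CNI will attach endpoints to node %q\n", string(name))
}
```

These failures clear on their own once calico-node reaches Running; the node:v3.30.4 pull logged at 23:57:01.069 above is that startup in progress.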
Jan 23 23:57:01.269455 containerd[2024]: time="2026-01-23T23:57:01.269195741Z" level=error msg="encountered an error cleaning up failed sandbox \"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.270473 containerd[2024]: time="2026-01-23T23:57:01.270381089Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d4bjn,Uid:91093aed-82d4-44e1-9d6b-e10aeaeca718,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.271251 kubelet[3241]: E0123 23:57:01.270980 3241 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.271251 kubelet[3241]: E0123 23:57:01.271061 3241 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-d4bjn" Jan 23 23:57:01.271251 kubelet[3241]: E0123 23:57:01.271102 3241 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-d4bjn" Jan 23 23:57:01.271707 kubelet[3241]: E0123 23:57:01.271188 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-d4bjn_kube-system(91093aed-82d4-44e1-9d6b-e10aeaeca718)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-d4bjn_kube-system(91093aed-82d4-44e1-9d6b-e10aeaeca718)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-d4bjn" podUID="91093aed-82d4-44e1-9d6b-e10aeaeca718" Jan 23 23:57:01.298265 containerd[2024]: time="2026-01-23T23:57:01.297911501Z" level=error msg="Failed to destroy network for sandbox \"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jan 23 23:57:01.307341 containerd[2024]: time="2026-01-23T23:57:01.305582093Z" level=error msg="encountered an error cleaning up failed sandbox \"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.306700 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f-shm.mount: Deactivated successfully. Jan 23 23:57:01.309484 containerd[2024]: time="2026-01-23T23:57:01.308705957Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74cfd6877d-hr9jw,Uid:64d067e9-db06-43a4-8ec2-5418bd9de44b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.310656 kubelet[3241]: E0123 23:57:01.310572 3241 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.310837 kubelet[3241]: E0123 23:57:01.310677 3241 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74cfd6877d-hr9jw" Jan 23 23:57:01.310837 kubelet[3241]: E0123 23:57:01.310714 3241 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74cfd6877d-hr9jw" Jan 23 23:57:01.310837 kubelet[3241]: E0123 23:57:01.310784 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74cfd6877d-hr9jw_calico-system(64d067e9-db06-43a4-8ec2-5418bd9de44b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74cfd6877d-hr9jw_calico-system(64d067e9-db06-43a4-8ec2-5418bd9de44b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74cfd6877d-hr9jw" podUID="64d067e9-db06-43a4-8ec2-5418bd9de44b" Jan 
23 23:57:01.316561 containerd[2024]: time="2026-01-23T23:57:01.316500089Z" level=error msg="Failed to destroy network for sandbox \"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.321536 containerd[2024]: time="2026-01-23T23:57:01.320233649Z" level=error msg="encountered an error cleaning up failed sandbox \"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.324903 containerd[2024]: time="2026-01-23T23:57:01.322493969Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-q6gtf,Uid:40024e0b-dc12-464a-9bd9-6f315f803fe4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.325070 kubelet[3241]: E0123 23:57:01.324792 3241 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.325070 kubelet[3241]: E0123 23:57:01.324951 3241 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-q6gtf" Jan 23 23:57:01.325070 kubelet[3241]: E0123 23:57:01.325007 3241 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-q6gtf" Jan 23 23:57:01.328161 kubelet[3241]: E0123 23:57:01.325146 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-q6gtf_calico-system(40024e0b-dc12-464a-9bd9-6f315f803fe4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-q6gtf_calico-system(40024e0b-dc12-464a-9bd9-6f315f803fe4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/goldmane-666569f655-q6gtf" podUID="40024e0b-dc12-464a-9bd9-6f315f803fe4" Jan 23 23:57:01.325469 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c-shm.mount: Deactivated successfully. Jan 23 23:57:01.365952 containerd[2024]: time="2026-01-23T23:57:01.365785181Z" level=error msg="Failed to destroy network for sandbox \"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.372187 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1-shm.mount: Deactivated successfully. Jan 23 23:57:01.373627 containerd[2024]: time="2026-01-23T23:57:01.373244453Z" level=error msg="encountered an error cleaning up failed sandbox \"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.373627 containerd[2024]: time="2026-01-23T23:57:01.373456709Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-84494b5d4d-lj25m,Uid:82695ea2-4281-4b27-853c-a668e2f1fb61,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.374826 kubelet[3241]: E0123 23:57:01.374157 3241 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.374826 kubelet[3241]: E0123 23:57:01.374229 3241 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84494b5d4d-lj25m" Jan 23 23:57:01.374826 kubelet[3241]: E0123 23:57:01.374263 3241 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-84494b5d4d-lj25m" Jan 23 23:57:01.375059 kubelet[3241]: E0123 23:57:01.374392 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-84494b5d4d-lj25m_calico-system(82695ea2-4281-4b27-853c-a668e2f1fb61)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"whisker-84494b5d4d-lj25m_calico-system(82695ea2-4281-4b27-853c-a668e2f1fb61)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-84494b5d4d-lj25m" podUID="82695ea2-4281-4b27-853c-a668e2f1fb61" Jan 23 23:57:01.383183 containerd[2024]: time="2026-01-23T23:57:01.382563845Z" level=error msg="Failed to destroy network for sandbox \"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.384735 containerd[2024]: time="2026-01-23T23:57:01.384642437Z" level=error msg="encountered an error cleaning up failed sandbox \"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.385132 containerd[2024]: time="2026-01-23T23:57:01.384879953Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6976454ff7-ddg9z,Uid:7d79c384-4d50-4538-9d9a-312b65c47eb8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.385656 kubelet[3241]: E0123 23:57:01.385602 3241 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.385855 kubelet[3241]: E0123 23:57:01.385686 3241 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6976454ff7-ddg9z" Jan 23 23:57:01.385855 kubelet[3241]: E0123 23:57:01.385726 3241 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6976454ff7-ddg9z" Jan 23 23:57:01.386485 kubelet[3241]: E0123 23:57:01.385835 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-6976454ff7-ddg9z_calico-apiserver(7d79c384-4d50-4538-9d9a-312b65c47eb8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6976454ff7-ddg9z_calico-apiserver(7d79c384-4d50-4538-9d9a-312b65c47eb8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6976454ff7-ddg9z" podUID="7d79c384-4d50-4538-9d9a-312b65c47eb8" Jan 23 23:57:01.839131 systemd[1]: Created slice kubepods-besteffort-pod46c86ab0_1223_4a22_bfcf_7f463abcf340.slice - libcontainer container kubepods-besteffort-pod46c86ab0_1223_4a22_bfcf_7f463abcf340.slice. Jan 23 23:57:01.843506 containerd[2024]: time="2026-01-23T23:57:01.843409052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rn45p,Uid:46c86ab0-1223-4a22-bfcf-7f463abcf340,Namespace:calico-system,Attempt:0,}" Jan 23 23:57:01.943711 containerd[2024]: time="2026-01-23T23:57:01.943635572Z" level=error msg="Failed to destroy network for sandbox \"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.944632 containerd[2024]: time="2026-01-23T23:57:01.944577260Z" level=error msg="encountered an error cleaning up failed sandbox \"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.944741 containerd[2024]: time="2026-01-23T23:57:01.944668688Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rn45p,Uid:46c86ab0-1223-4a22-bfcf-7f463abcf340,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.945055 kubelet[3241]: E0123 23:57:01.944972 3241 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:01.945136 kubelet[3241]: E0123 23:57:01.945088 3241 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rn45p" Jan 23 23:57:01.945136 kubelet[3241]: E0123 23:57:01.945123 3241 kuberuntime_manager.go:1252] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-rn45p" Jan 23 23:57:01.945296 kubelet[3241]: E0123 23:57:01.945204 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-rn45p_calico-system(46c86ab0-1223-4a22-bfcf-7f463abcf340)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-rn45p_calico-system(46c86ab0-1223-4a22-bfcf-7f463abcf340)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rn45p" podUID="46c86ab0-1223-4a22-bfcf-7f463abcf340" Jan 23 23:57:02.065110 kubelet[3241]: I0123 23:57:02.065046 3241 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" Jan 23 23:57:02.070118 containerd[2024]: time="2026-01-23T23:57:02.067275113Z" level=info msg="StopPodSandbox for \"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\"" Jan 23 23:57:02.070118 containerd[2024]: time="2026-01-23T23:57:02.067591625Z" level=info msg="Ensure that sandbox 54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c in task-service has been cleanup successfully" Jan 23 23:57:02.070732 kubelet[3241]: I0123 23:57:02.068730 3241 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" Jan 23 23:57:02.071203 containerd[2024]: time="2026-01-23T23:57:02.070868429Z" level=info msg="StopPodSandbox for \"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\"" Jan 23 23:57:02.071714 containerd[2024]: time="2026-01-23T23:57:02.071492597Z" level=info msg="Ensure that sandbox 16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce in task-service has been cleanup successfully" Jan 23 23:57:02.079800 kubelet[3241]: I0123 23:57:02.078878 3241 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" Jan 23 23:57:02.083515 containerd[2024]: time="2026-01-23T23:57:02.082602281Z" level=info msg="StopPodSandbox for \"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\"" Jan 23 23:57:02.086007 containerd[2024]: time="2026-01-23T23:57:02.085951637Z" level=info msg="Ensure that sandbox 978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48 in task-service has been cleanup successfully" Jan 23 23:57:02.088242 kubelet[3241]: I0123 23:57:02.087755 3241 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" Jan 23 23:57:02.094501 containerd[2024]: time="2026-01-23T23:57:02.093980657Z" level=info msg="StopPodSandbox for \"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\"" Jan 23 23:57:02.097716 containerd[2024]: time="2026-01-23T23:57:02.097286633Z" level=info msg="Ensure that 
sandbox 931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1 in task-service has been cleanup successfully" Jan 23 23:57:02.111303 kubelet[3241]: I0123 23:57:02.111264 3241 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" Jan 23 23:57:02.116168 containerd[2024]: time="2026-01-23T23:57:02.115169921Z" level=info msg="StopPodSandbox for \"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\"" Jan 23 23:57:02.120339 containerd[2024]: time="2026-01-23T23:57:02.120035081Z" level=info msg="Ensure that sandbox cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b in task-service has been cleanup successfully" Jan 23 23:57:02.121551 kubelet[3241]: I0123 23:57:02.121204 3241 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" Jan 23 23:57:02.127359 containerd[2024]: time="2026-01-23T23:57:02.127275629Z" level=info msg="StopPodSandbox for \"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\"" Jan 23 23:57:02.127847 containerd[2024]: time="2026-01-23T23:57:02.127809473Z" level=info msg="Ensure that sandbox 4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6 in task-service has been cleanup successfully" Jan 23 23:57:02.138753 kubelet[3241]: I0123 23:57:02.136890 3241 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" Jan 23 23:57:02.142949 containerd[2024]: time="2026-01-23T23:57:02.141831425Z" level=info msg="StopPodSandbox for \"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\"" Jan 23 23:57:02.147369 containerd[2024]: time="2026-01-23T23:57:02.146767469Z" level=info msg="Ensure that sandbox e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f in task-service has been cleanup successfully" Jan 23 23:57:02.151193 kubelet[3241]: I0123 23:57:02.151041 3241 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" Jan 23 23:57:02.156690 containerd[2024]: time="2026-01-23T23:57:02.156610733Z" level=info msg="StopPodSandbox for \"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\"" Jan 23 23:57:02.157276 containerd[2024]: time="2026-01-23T23:57:02.156954197Z" level=info msg="Ensure that sandbox eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487 in task-service has been cleanup successfully" Jan 23 23:57:02.179657 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce-shm.mount: Deactivated successfully. 
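From 23:57:02 onward kubelet switches to cleanup: pod_container_deletor flags each orphaned sandbox and issues StopPodSandbox, and, as the entries that follow show, the delete path needs the same nodename file, so every stop fails and the pod is simply requeued ("Error syncing pod, skipping") until the next sync. A hedged sketch of that retry shape (illustrative only, not kubelet code):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// Tear-down has the same dependency as set-up: the calico CNI DEL also needs
// /var/lib/calico/nodename, so StopPodSandbox keeps failing until calico/node
// is running, and kubelet retries on its next sync of the pod.
func stopPodSandbox(id string) error {
	if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
		return fmt.Errorf("failed to destroy network for sandbox %q: %w", id, err)
	}
	return nil
}

func main() {
	id := "54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c"
	for attempt := 1; attempt <= 3; attempt++ {
		if err := stopPodSandbox(id); err != nil {
			fmt.Println("Error syncing pod, skipping:", err)
			time.Sleep(time.Second) // kubelet uses per-pod backoff; fixed sleep here
			continue
		}
		fmt.Println("sandbox stopped")
		return
	}
}
```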
Jan 23 23:57:02.342198 containerd[2024]: time="2026-01-23T23:57:02.341600058Z" level=error msg="StopPodSandbox for \"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\" failed" error="failed to destroy network for sandbox \"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:02.342790 kubelet[3241]: E0123 23:57:02.341937 3241 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" Jan 23 23:57:02.342790 kubelet[3241]: E0123 23:57:02.342014 3241 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce"} Jan 23 23:57:02.342790 kubelet[3241]: E0123 23:57:02.342094 3241 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7d79c384-4d50-4538-9d9a-312b65c47eb8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:57:02.342790 kubelet[3241]: E0123 23:57:02.342137 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7d79c384-4d50-4538-9d9a-312b65c47eb8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6976454ff7-ddg9z" podUID="7d79c384-4d50-4538-9d9a-312b65c47eb8" Jan 23 23:57:02.345697 containerd[2024]: time="2026-01-23T23:57:02.343776726Z" level=error msg="StopPodSandbox for \"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\" failed" error="failed to destroy network for sandbox \"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:02.345842 kubelet[3241]: E0123 23:57:02.344366 3241 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" Jan 23 23:57:02.345842 kubelet[3241]: E0123 23:57:02.344432 3241 
kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48"} Jan 23 23:57:02.345842 kubelet[3241]: E0123 23:57:02.344487 3241 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9f58d96e-4844-4626-a760-be9823990f64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:57:02.345842 kubelet[3241]: E0123 23:57:02.344533 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9f58d96e-4844-4626-a760-be9823990f64\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6s8v5" podUID="9f58d96e-4844-4626-a760-be9823990f64" Jan 23 23:57:02.358651 containerd[2024]: time="2026-01-23T23:57:02.358582242Z" level=error msg="StopPodSandbox for \"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\" failed" error="failed to destroy network for sandbox \"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:02.360470 kubelet[3241]: E0123 23:57:02.360201 3241 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" Jan 23 23:57:02.360470 kubelet[3241]: E0123 23:57:02.360283 3241 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c"} Jan 23 23:57:02.360470 kubelet[3241]: E0123 23:57:02.360359 3241 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"40024e0b-dc12-464a-9bd9-6f315f803fe4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:57:02.360470 kubelet[3241]: E0123 23:57:02.360404 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"40024e0b-dc12-464a-9bd9-6f315f803fe4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\\\": plugin 
type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-q6gtf" podUID="40024e0b-dc12-464a-9bd9-6f315f803fe4" Jan 23 23:57:02.369533 containerd[2024]: time="2026-01-23T23:57:02.369279138Z" level=error msg="StopPodSandbox for \"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\" failed" error="failed to destroy network for sandbox \"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:02.370593 containerd[2024]: time="2026-01-23T23:57:02.370483554Z" level=error msg="StopPodSandbox for \"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\" failed" error="failed to destroy network for sandbox \"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:02.371507 kubelet[3241]: E0123 23:57:02.371160 3241 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" Jan 23 23:57:02.371507 kubelet[3241]: E0123 23:57:02.371237 3241 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b"} Jan 23 23:57:02.371507 kubelet[3241]: E0123 23:57:02.371298 3241 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"74258f81-20b6-4c16-8e17-d994c72b6c19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:57:02.371507 kubelet[3241]: E0123 23:57:02.371362 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"74258f81-20b6-4c16-8e17-d994c72b6c19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6976454ff7-t76qf" podUID="74258f81-20b6-4c16-8e17-d994c72b6c19" Jan 23 23:57:02.371910 containerd[2024]: time="2026-01-23T23:57:02.371206386Z" level=error msg="StopPodSandbox for \"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\" failed" error="failed to destroy network for sandbox \"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\": plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:02.373151 kubelet[3241]: E0123 23:57:02.372621 3241 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" Jan 23 23:57:02.373151 kubelet[3241]: E0123 23:57:02.372689 3241 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1"} Jan 23 23:57:02.373151 kubelet[3241]: E0123 23:57:02.372753 3241 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"82695ea2-4281-4b27-853c-a668e2f1fb61\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:57:02.373151 kubelet[3241]: E0123 23:57:02.372792 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"82695ea2-4281-4b27-853c-a668e2f1fb61\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-84494b5d4d-lj25m" podUID="82695ea2-4281-4b27-853c-a668e2f1fb61" Jan 23 23:57:02.373575 kubelet[3241]: E0123 23:57:02.372841 3241 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" Jan 23 23:57:02.373575 kubelet[3241]: E0123 23:57:02.372873 3241 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6"} Jan 23 23:57:02.374136 kubelet[3241]: E0123 23:57:02.372911 3241 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"91093aed-82d4-44e1-9d6b-e10aeaeca718\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:57:02.374136 kubelet[3241]: E0123 23:57:02.373805 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"KillPodSandbox\" for \"91093aed-82d4-44e1-9d6b-e10aeaeca718\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-d4bjn" podUID="91093aed-82d4-44e1-9d6b-e10aeaeca718" Jan 23 23:57:02.398277 containerd[2024]: time="2026-01-23T23:57:02.397082538Z" level=error msg="StopPodSandbox for \"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\" failed" error="failed to destroy network for sandbox \"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:02.398461 kubelet[3241]: E0123 23:57:02.397477 3241 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" Jan 23 23:57:02.398461 kubelet[3241]: E0123 23:57:02.397574 3241 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f"} Jan 23 23:57:02.398461 kubelet[3241]: E0123 23:57:02.397636 3241 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"64d067e9-db06-43a4-8ec2-5418bd9de44b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:57:02.398461 kubelet[3241]: E0123 23:57:02.397675 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"64d067e9-db06-43a4-8ec2-5418bd9de44b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74cfd6877d-hr9jw" podUID="64d067e9-db06-43a4-8ec2-5418bd9de44b" Jan 23 23:57:02.399008 containerd[2024]: time="2026-01-23T23:57:02.398950770Z" level=error msg="StopPodSandbox for \"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\" failed" error="failed to destroy network for sandbox \"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:57:02.399679 kubelet[3241]: E0123 23:57:02.399355 3241 log.go:32] 
"StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" Jan 23 23:57:02.399679 kubelet[3241]: E0123 23:57:02.399430 3241 kuberuntime_manager.go:1586] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487"} Jan 23 23:57:02.399679 kubelet[3241]: E0123 23:57:02.399483 3241 kuberuntime_manager.go:1161] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"46c86ab0-1223-4a22-bfcf-7f463abcf340\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:57:02.399679 kubelet[3241]: E0123 23:57:02.399522 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"46c86ab0-1223-4a22-bfcf-7f463abcf340\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-rn45p" podUID="46c86ab0-1223-4a22-bfcf-7f463abcf340" Jan 23 23:57:07.727304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3582920444.mount: Deactivated successfully. 
Jan 23 23:57:07.787230 containerd[2024]: time="2026-01-23T23:57:07.786822733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:57:07.788801 containerd[2024]: time="2026-01-23T23:57:07.788521657Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562"
Jan 23 23:57:07.790050 containerd[2024]: time="2026-01-23T23:57:07.789882277Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:57:07.794605 containerd[2024]: time="2026-01-23T23:57:07.794445157Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:57:07.796247 containerd[2024]: time="2026-01-23T23:57:07.795677821Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.725836749s"
Jan 23 23:57:07.796247 containerd[2024]: time="2026-01-23T23:57:07.795740269Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\""
Jan 23 23:57:07.850849 containerd[2024]: time="2026-01-23T23:57:07.850789117Z" level=info msg="CreateContainer within sandbox \"f1bad532bbb37ecd5e4767847da01970b57ad5d7f2a01b0e40d4369716528c79\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Jan 23 23:57:07.878852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1590020878.mount: Deactivated successfully.
Jan 23 23:57:07.882118 containerd[2024]: time="2026-01-23T23:57:07.881940182Z" level=info msg="CreateContainer within sandbox \"f1bad532bbb37ecd5e4767847da01970b57ad5d7f2a01b0e40d4369716528c79\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"8a33f4062a2439044b99d3bab8f8c067b6ca9562ad7701fc568dbe40f55ebf04\""
Jan 23 23:57:07.884841 containerd[2024]: time="2026-01-23T23:57:07.884772986Z" level=info msg="StartContainer for \"8a33f4062a2439044b99d3bab8f8c067b6ca9562ad7701fc568dbe40f55ebf04\""
Jan 23 23:57:07.939633 systemd[1]: Started cri-containerd-8a33f4062a2439044b99d3bab8f8c067b6ca9562ad7701fc568dbe40f55ebf04.scope - libcontainer container 8a33f4062a2439044b99d3bab8f8c067b6ca9562ad7701fc568dbe40f55ebf04.
Jan 23 23:57:08.006989 containerd[2024]: time="2026-01-23T23:57:08.006638770Z" level=info msg="StartContainer for \"8a33f4062a2439044b99d3bab8f8c067b6ca9562ad7701fc568dbe40f55ebf04\" returns successfully"
Jan 23 23:57:08.222817 kubelet[3241]: I0123 23:57:08.222213 3241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-kp558" podStartSLOduration=3.126684127 podStartE2EDuration="19.222190859s" podCreationTimestamp="2026-01-23 23:56:49 +0000 UTC" firstStartedPulling="2026-01-23 23:56:51.702278769 +0000 UTC m=+36.164793972" lastFinishedPulling="2026-01-23 23:57:07.797785513 +0000 UTC m=+52.260300704" observedRunningTime="2026-01-23 23:57:08.214159271 +0000 UTC m=+52.676674474" watchObservedRunningTime="2026-01-23 23:57:08.222190859 +0000 UTC m=+52.684706074"
Jan 23 23:57:08.471503 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Jan 23 23:57:08.471674 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Jan 23 23:57:08.688490 containerd[2024]: time="2026-01-23T23:57:08.688425386Z" level=info msg="StopPodSandbox for \"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\""
Jan 23 23:57:09.681847 systemd[1]: Started sshd@7-172.31.28.204:22-4.153.228.146:55832.service - OpenSSH per-connection server daemon (4.153.228.146:55832).
Jan 23 23:57:10.082921 containerd[2024]: 2026-01-23 23:57:09.912 [INFO][4753] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1"
Jan 23 23:57:10.082921 containerd[2024]: 2026-01-23 23:57:09.943 [INFO][4753] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" iface="eth0" netns="/var/run/netns/cni-f7d24407-2eaa-0ecc-4450-f7fc111ff0a7"
Jan 23 23:57:10.082921 containerd[2024]: 2026-01-23 23:57:09.944 [INFO][4753] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" iface="eth0" netns="/var/run/netns/cni-f7d24407-2eaa-0ecc-4450-f7fc111ff0a7"
Jan 23 23:57:10.082921 containerd[2024]: 2026-01-23 23:57:09.978 [INFO][4753] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" iface="eth0" netns="/var/run/netns/cni-f7d24407-2eaa-0ecc-4450-f7fc111ff0a7"
Jan 23 23:57:10.082921 containerd[2024]: 2026-01-23 23:57:09.979 [INFO][4753] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1"
Jan 23 23:57:10.082921 containerd[2024]: 2026-01-23 23:57:09.979 [INFO][4753] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1"
Jan 23 23:57:10.082921 containerd[2024]: 2026-01-23 23:57:10.052 [INFO][4798] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" HandleID="k8s-pod-network.931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" Workload="ip--172--31--28--204-k8s-whisker--84494b5d4d--lj25m-eth0"
Jan 23 23:57:10.082921 containerd[2024]: 2026-01-23 23:57:10.053 [INFO][4798] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:57:10.082921 containerd[2024]: 2026-01-23 23:57:10.053 [INFO][4798] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:57:10.082921 containerd[2024]: 2026-01-23 23:57:10.068 [WARNING][4798] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" HandleID="k8s-pod-network.931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" Workload="ip--172--31--28--204-k8s-whisker--84494b5d4d--lj25m-eth0"
Jan 23 23:57:10.082921 containerd[2024]: 2026-01-23 23:57:10.068 [INFO][4798] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" HandleID="k8s-pod-network.931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" Workload="ip--172--31--28--204-k8s-whisker--84494b5d4d--lj25m-eth0"
Jan 23 23:57:10.082921 containerd[2024]: 2026-01-23 23:57:10.072 [INFO][4798] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:57:10.082921 containerd[2024]: 2026-01-23 23:57:10.079 [INFO][4753] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1"
Jan 23 23:57:10.088103 containerd[2024]: time="2026-01-23T23:57:10.083598684Z" level=info msg="TearDown network for sandbox \"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\" successfully"
Jan 23 23:57:10.088103 containerd[2024]: time="2026-01-23T23:57:10.083640144Z" level=info msg="StopPodSandbox for \"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\" returns successfully"
Jan 23 23:57:10.090476 systemd[1]: run-netns-cni\x2df7d24407\x2d2eaa\x2d0ecc\x2d4450\x2df7fc111ff0a7.mount: Deactivated successfully.
Jan 23 23:57:10.215686 kubelet[3241]: I0123 23:57:10.215606 3241 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82695ea2-4281-4b27-853c-a668e2f1fb61-whisker-ca-bundle\") pod \"82695ea2-4281-4b27-853c-a668e2f1fb61\" (UID: \"82695ea2-4281-4b27-853c-a668e2f1fb61\") "
Jan 23 23:57:10.215686 kubelet[3241]: I0123 23:57:10.215691 3241 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/82695ea2-4281-4b27-853c-a668e2f1fb61-whisker-backend-key-pair\") pod \"82695ea2-4281-4b27-853c-a668e2f1fb61\" (UID: \"82695ea2-4281-4b27-853c-a668e2f1fb61\") "
Jan 23 23:57:10.218904 kubelet[3241]: I0123 23:57:10.215742 3241 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czjb2\" (UniqueName: \"kubernetes.io/projected/82695ea2-4281-4b27-853c-a668e2f1fb61-kube-api-access-czjb2\") pod \"82695ea2-4281-4b27-853c-a668e2f1fb61\" (UID: \"82695ea2-4281-4b27-853c-a668e2f1fb61\") "
Jan 23 23:57:10.218904 kubelet[3241]: I0123 23:57:10.217341 3241 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82695ea2-4281-4b27-853c-a668e2f1fb61-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "82695ea2-4281-4b27-853c-a668e2f1fb61" (UID: "82695ea2-4281-4b27-853c-a668e2f1fb61"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 23 23:57:10.228788 sshd[4794]: Accepted publickey for core from 4.153.228.146 port 55832 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:57:10.228203 sshd[4794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:57:10.227499 systemd[1]: run-containerd-runc-k8s.io-8a33f4062a2439044b99d3bab8f8c067b6ca9562ad7701fc568dbe40f55ebf04-runc.aceH8H.mount: Deactivated successfully.
Jan 23 23:57:10.235487 kubelet[3241]: I0123 23:57:10.234604 3241 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82695ea2-4281-4b27-853c-a668e2f1fb61-kube-api-access-czjb2" (OuterVolumeSpecName: "kube-api-access-czjb2") pod "82695ea2-4281-4b27-853c-a668e2f1fb61" (UID: "82695ea2-4281-4b27-853c-a668e2f1fb61"). InnerVolumeSpecName "kube-api-access-czjb2". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 23 23:57:10.238300 systemd[1]: var-lib-kubelet-pods-82695ea2\x2d4281\x2d4b27\x2d853c\x2da668e2f1fb61-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dczjb2.mount: Deactivated successfully.
Jan 23 23:57:10.250814 kubelet[3241]: I0123 23:57:10.250645 3241 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82695ea2-4281-4b27-853c-a668e2f1fb61-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "82695ea2-4281-4b27-853c-a668e2f1fb61" (UID: "82695ea2-4281-4b27-853c-a668e2f1fb61"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 23 23:57:10.254198 systemd[1]: var-lib-kubelet-pods-82695ea2\x2d4281\x2d4b27\x2d853c\x2da668e2f1fb61-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Jan 23 23:57:10.267765 systemd-logind[2010]: New session 8 of user core.
Jan 23 23:57:10.276962 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 23 23:57:10.318729 kubelet[3241]: I0123 23:57:10.317203 3241 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-czjb2\" (UniqueName: \"kubernetes.io/projected/82695ea2-4281-4b27-853c-a668e2f1fb61-kube-api-access-czjb2\") on node \"ip-172-31-28-204\" DevicePath \"\""
Jan 23 23:57:10.318729 kubelet[3241]: I0123 23:57:10.317256 3241 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/82695ea2-4281-4b27-853c-a668e2f1fb61-whisker-ca-bundle\") on node \"ip-172-31-28-204\" DevicePath \"\""
Jan 23 23:57:10.318729 kubelet[3241]: I0123 23:57:10.317282 3241 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/82695ea2-4281-4b27-853c-a668e2f1fb61-whisker-backend-key-pair\") on node \"ip-172-31-28-204\" DevicePath \"\""
Jan 23 23:57:10.498932 systemd[1]: Removed slice kubepods-besteffort-pod82695ea2_4281_4b27_853c_a668e2f1fb61.slice - libcontainer container kubepods-besteffort-pod82695ea2_4281_4b27_853c_a668e2f1fb61.slice.
Jan 23 23:57:10.785051 systemd[1]: Created slice kubepods-besteffort-pode0693b48_91c3_4d6b_a757_c65fc3ee493a.slice - libcontainer container kubepods-besteffort-pode0693b48_91c3_4d6b_a757_c65fc3ee493a.slice.
Jan 23 23:57:10.823111 kubelet[3241]: I0123 23:57:10.823044 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e0693b48-91c3-4d6b-a757-c65fc3ee493a-whisker-backend-key-pair\") pod \"whisker-6d6bcfdb8b-dvgwk\" (UID: \"e0693b48-91c3-4d6b-a757-c65fc3ee493a\") " pod="calico-system/whisker-6d6bcfdb8b-dvgwk"
Jan 23 23:57:10.823393 kubelet[3241]: I0123 23:57:10.823122 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e0693b48-91c3-4d6b-a757-c65fc3ee493a-whisker-ca-bundle\") pod \"whisker-6d6bcfdb8b-dvgwk\" (UID: \"e0693b48-91c3-4d6b-a757-c65fc3ee493a\") " pod="calico-system/whisker-6d6bcfdb8b-dvgwk"
Jan 23 23:57:10.823393 kubelet[3241]: I0123 23:57:10.823182 3241 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7jq4\" (UniqueName: \"kubernetes.io/projected/e0693b48-91c3-4d6b-a757-c65fc3ee493a-kube-api-access-z7jq4\") pod \"whisker-6d6bcfdb8b-dvgwk\" (UID: \"e0693b48-91c3-4d6b-a757-c65fc3ee493a\") " pod="calico-system/whisker-6d6bcfdb8b-dvgwk"
Jan 23 23:57:11.007673 sshd[4794]: pam_unix(sshd:session): session closed for user core
Jan 23 23:57:11.013927 systemd[1]: sshd@7-172.31.28.204:22-4.153.228.146:55832.service: Deactivated successfully.
Jan 23 23:57:11.016515 systemd-logind[2010]: Session 8 logged out. Waiting for processes to exit.
Jan 23 23:57:11.022116 systemd[1]: session-8.scope: Deactivated successfully.
Jan 23 23:57:11.029369 systemd-logind[2010]: Removed session 8.
Jan 23 23:57:11.098352 containerd[2024]: time="2026-01-23T23:57:11.096721561Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d6bcfdb8b-dvgwk,Uid:e0693b48-91c3-4d6b-a757-c65fc3ee493a,Namespace:calico-system,Attempt:0,}"
Jan 23 23:57:11.541775 (udev-worker)[4950]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 23:57:11.569493 systemd-networkd[1938]: cali0668c356275: Link UP
Jan 23 23:57:11.572101 systemd-networkd[1938]: cali0668c356275: Gained carrier
Jan 23 23:57:11.704789 containerd[2024]: 2026-01-23 23:57:11.222 [INFO][4924] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 23 23:57:11.704789 containerd[2024]: 2026-01-23 23:57:11.305 [INFO][4924] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--204-k8s-whisker--6d6bcfdb8b--dvgwk-eth0 whisker-6d6bcfdb8b- calico-system e0693b48-91c3-4d6b-a757-c65fc3ee493a 963 0 2026-01-23 23:57:10 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:6d6bcfdb8b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-28-204 whisker-6d6bcfdb8b-dvgwk eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali0668c356275 [] [] }} ContainerID="8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941" Namespace="calico-system" Pod="whisker-6d6bcfdb8b-dvgwk" WorkloadEndpoint="ip--172--31--28--204-k8s-whisker--6d6bcfdb8b--dvgwk-"
Jan 23 23:57:11.704789 containerd[2024]: 2026-01-23 23:57:11.305 [INFO][4924] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941" Namespace="calico-system" Pod="whisker-6d6bcfdb8b-dvgwk" WorkloadEndpoint="ip--172--31--28--204-k8s-whisker--6d6bcfdb8b--dvgwk-eth0"
Jan 23 23:57:11.704789 containerd[2024]: 2026-01-23 23:57:11.405 [INFO][4937] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941" HandleID="k8s-pod-network.8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941" Workload="ip--172--31--28--204-k8s-whisker--6d6bcfdb8b--dvgwk-eth0"
Jan 23 23:57:11.704789 containerd[2024]: 2026-01-23 23:57:11.406 [INFO][4937] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941" HandleID="k8s-pod-network.8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941" Workload="ip--172--31--28--204-k8s-whisker--6d6bcfdb8b--dvgwk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d36e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-204", "pod":"whisker-6d6bcfdb8b-dvgwk", "timestamp":"2026-01-23 23:57:11.405746715 +0000 UTC"}, Hostname:"ip-172-31-28-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 23 23:57:11.704789 containerd[2024]: 2026-01-23 23:57:11.406 [INFO][4937] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:57:11.704789 containerd[2024]: 2026-01-23 23:57:11.406 [INFO][4937] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:57:11.704789 containerd[2024]: 2026-01-23 23:57:11.406 [INFO][4937] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-204'
Jan 23 23:57:11.704789 containerd[2024]: 2026-01-23 23:57:11.426 [INFO][4937] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941" host="ip-172-31-28-204"
Jan 23 23:57:11.704789 containerd[2024]: 2026-01-23 23:57:11.439 [INFO][4937] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-204"
Jan 23 23:57:11.704789 containerd[2024]: 2026-01-23 23:57:11.451 [INFO][4937] ipam/ipam.go 511: Trying affinity for 192.168.112.128/26 host="ip-172-31-28-204"
Jan 23 23:57:11.704789 containerd[2024]: 2026-01-23 23:57:11.459 [INFO][4937] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.128/26 host="ip-172-31-28-204"
Jan 23 23:57:11.704789 containerd[2024]: 2026-01-23 23:57:11.465 [INFO][4937] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ip-172-31-28-204"
Jan 23 23:57:11.704789 containerd[2024]: 2026-01-23 23:57:11.465 [INFO][4937] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941" host="ip-172-31-28-204"
Jan 23 23:57:11.704789 containerd[2024]: 2026-01-23 23:57:11.476 [INFO][4937] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941
Jan 23 23:57:11.704789 containerd[2024]: 2026-01-23 23:57:11.490 [INFO][4937] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941" host="ip-172-31-28-204"
Jan 23 23:57:11.704789 containerd[2024]: 2026-01-23 23:57:11.506 [INFO][4937] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.112.129/26] block=192.168.112.128/26 handle="k8s-pod-network.8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941" host="ip-172-31-28-204"
Jan 23 23:57:11.704789 containerd[2024]: 2026-01-23 23:57:11.506 [INFO][4937] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.129/26] handle="k8s-pod-network.8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941" host="ip-172-31-28-204"
Jan 23 23:57:11.704789 containerd[2024]: 2026-01-23 23:57:11.506 [INFO][4937] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:57:11.704789 containerd[2024]: 2026-01-23 23:57:11.506 [INFO][4937] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.112.129/26] IPv6=[] ContainerID="8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941" HandleID="k8s-pod-network.8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941" Workload="ip--172--31--28--204-k8s-whisker--6d6bcfdb8b--dvgwk-eth0"
Jan 23 23:57:11.706223 containerd[2024]: 2026-01-23 23:57:11.513 [INFO][4924] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941" Namespace="calico-system" Pod="whisker-6d6bcfdb8b-dvgwk" WorkloadEndpoint="ip--172--31--28--204-k8s-whisker--6d6bcfdb8b--dvgwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-whisker--6d6bcfdb8b--dvgwk-eth0", GenerateName:"whisker-6d6bcfdb8b-", Namespace:"calico-system", SelfLink:"", UID:"e0693b48-91c3-4d6b-a757-c65fc3ee493a", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d6bcfdb8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"", Pod:"whisker-6d6bcfdb8b-dvgwk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.112.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0668c356275", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:57:11.706223 containerd[2024]: 2026-01-23 23:57:11.513 [INFO][4924] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.129/32] ContainerID="8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941" Namespace="calico-system" Pod="whisker-6d6bcfdb8b-dvgwk" WorkloadEndpoint="ip--172--31--28--204-k8s-whisker--6d6bcfdb8b--dvgwk-eth0"
Jan 23 23:57:11.706223 containerd[2024]: 2026-01-23 23:57:11.513 [INFO][4924] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0668c356275 ContainerID="8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941" Namespace="calico-system" Pod="whisker-6d6bcfdb8b-dvgwk" WorkloadEndpoint="ip--172--31--28--204-k8s-whisker--6d6bcfdb8b--dvgwk-eth0"
Jan 23 23:57:11.706223 containerd[2024]: 2026-01-23 23:57:11.603 [INFO][4924] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941" Namespace="calico-system" Pod="whisker-6d6bcfdb8b-dvgwk" WorkloadEndpoint="ip--172--31--28--204-k8s-whisker--6d6bcfdb8b--dvgwk-eth0"
Jan 23 23:57:11.706223 containerd[2024]: 2026-01-23 23:57:11.604 [INFO][4924] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941" Namespace="calico-system" Pod="whisker-6d6bcfdb8b-dvgwk" WorkloadEndpoint="ip--172--31--28--204-k8s-whisker--6d6bcfdb8b--dvgwk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-whisker--6d6bcfdb8b--dvgwk-eth0", GenerateName:"whisker-6d6bcfdb8b-", Namespace:"calico-system", SelfLink:"", UID:"e0693b48-91c3-4d6b-a757-c65fc3ee493a", ResourceVersion:"963", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 57, 10, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"6d6bcfdb8b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941", Pod:"whisker-6d6bcfdb8b-dvgwk", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.112.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali0668c356275", MAC:"aa:fe:bd:a9:2f:3a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:57:11.706223 containerd[2024]: 2026-01-23 23:57:11.693 [INFO][4924] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941" Namespace="calico-system" Pod="whisker-6d6bcfdb8b-dvgwk" WorkloadEndpoint="ip--172--31--28--204-k8s-whisker--6d6bcfdb8b--dvgwk-eth0"
Jan 23 23:57:11.754473 containerd[2024]: time="2026-01-23T23:57:11.753458369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:57:11.754473 containerd[2024]: time="2026-01-23T23:57:11.753609629Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:57:11.754473 containerd[2024]: time="2026-01-23T23:57:11.753645569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:57:11.754473 containerd[2024]: time="2026-01-23T23:57:11.753852125Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:57:11.836268 kubelet[3241]: I0123 23:57:11.835808 3241 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82695ea2-4281-4b27-853c-a668e2f1fb61" path="/var/lib/kubelet/pods/82695ea2-4281-4b27-853c-a668e2f1fb61/volumes"
Jan 23 23:57:11.836035 systemd[1]: Started cri-containerd-8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941.scope - libcontainer container 8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941.
Jan 23 23:57:11.989695 containerd[2024]: time="2026-01-23T23:57:11.989639574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6d6bcfdb8b-dvgwk,Uid:e0693b48-91c3-4d6b-a757-c65fc3ee493a,Namespace:calico-system,Attempt:0,} returns sandbox id \"8fe20a3b56d3cb19732487e244eaa959cd6042947900c3dc7869f206bf31c941\""
Jan 23 23:57:12.016758 containerd[2024]: time="2026-01-23T23:57:12.016176014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 23 23:57:12.129372 kernel: bpftool[5022]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 23 23:57:12.283238 containerd[2024]: time="2026-01-23T23:57:12.283107579Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 23 23:57:12.288121 containerd[2024]: time="2026-01-23T23:57:12.286625283Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 23 23:57:12.288121 containerd[2024]: time="2026-01-23T23:57:12.286662375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 23 23:57:12.288506 kubelet[3241]: E0123 23:57:12.288160 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 23:57:12.288506 kubelet[3241]: E0123 23:57:12.288231 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 23:57:12.292687 kubelet[3241]: E0123 23:57:12.292563 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:671baf049109417185f3e6729fa67078,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z7jq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d6bcfdb8b-dvgwk_calico-system(e0693b48-91c3-4d6b-a757-c65fc3ee493a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:57:12.296002 containerd[2024]: time="2026-01-23T23:57:12.295559031Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 23 23:57:12.537837 containerd[2024]: time="2026-01-23T23:57:12.536756165Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 23 23:57:12.540307 containerd[2024]: time="2026-01-23T23:57:12.539850689Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 23 23:57:12.540307 containerd[2024]: time="2026-01-23T23:57:12.539953241Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 23 23:57:12.541244 kubelet[3241]: E0123 23:57:12.540787 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 23:57:12.541244 kubelet[3241]: E0123 23:57:12.540857 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 23:57:12.541500 kubelet[3241]: E0123 23:57:12.541117 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7jq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d6bcfdb8b-dvgwk_calico-system(e0693b48-91c3-4d6b-a757-c65fc3ee493a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:57:12.542469 kubelet[3241]: E0123 23:57:12.542381 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d6bcfdb8b-dvgwk" podUID="e0693b48-91c3-4d6b-a757-c65fc3ee493a"
Jan 23 23:57:12.679566 (udev-worker)[4949]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 23:57:12.689513 systemd-networkd[1938]: vxlan.calico: Link UP
Jan 23 23:57:12.689535 systemd-networkd[1938]: vxlan.calico: Gained carrier
Jan 23 23:57:12.783712 systemd-networkd[1938]: cali0668c356275: Gained IPv6LL
Jan 23 23:57:12.829752 containerd[2024]: time="2026-01-23T23:57:12.828038994Z" level=info msg="StopPodSandbox for \"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\""
Jan 23 23:57:13.018531 containerd[2024]: 2026-01-23 23:57:12.940 [INFO][5082] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f"
Jan 23 23:57:13.018531 containerd[2024]: 2026-01-23 23:57:12.941 [INFO][5082] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" iface="eth0" netns="/var/run/netns/cni-52570254-ed76-7107-68fc-391c876abe8d"
Jan 23 23:57:13.018531 containerd[2024]: 2026-01-23 23:57:12.942 [INFO][5082] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" iface="eth0" netns="/var/run/netns/cni-52570254-ed76-7107-68fc-391c876abe8d"
Jan 23 23:57:13.018531 containerd[2024]: 2026-01-23 23:57:12.942 [INFO][5082] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" iface="eth0" netns="/var/run/netns/cni-52570254-ed76-7107-68fc-391c876abe8d"
Jan 23 23:57:13.018531 containerd[2024]: 2026-01-23 23:57:12.942 [INFO][5082] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f"
Jan 23 23:57:13.018531 containerd[2024]: 2026-01-23 23:57:12.942 [INFO][5082] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f"
Jan 23 23:57:13.018531 containerd[2024]: 2026-01-23 23:57:12.992 [INFO][5090] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" HandleID="k8s-pod-network.e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" Workload="ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0"
Jan 23 23:57:13.018531 containerd[2024]: 2026-01-23 23:57:12.993 [INFO][5090] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:57:13.018531 containerd[2024]: 2026-01-23 23:57:12.993 [INFO][5090] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:57:13.018531 containerd[2024]: 2026-01-23 23:57:13.006 [WARNING][5090] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" HandleID="k8s-pod-network.e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" Workload="ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0"
Jan 23 23:57:13.018531 containerd[2024]: 2026-01-23 23:57:13.007 [INFO][5090] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" HandleID="k8s-pod-network.e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" Workload="ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0"
Jan 23 23:57:13.018531 containerd[2024]: 2026-01-23 23:57:13.009 [INFO][5090] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:57:13.018531 containerd[2024]: 2026-01-23 23:57:13.012 [INFO][5082] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f"
Jan 23 23:57:13.025645 containerd[2024]: time="2026-01-23T23:57:13.018748215Z" level=info msg="TearDown network for sandbox \"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\" successfully"
Jan 23 23:57:13.025645 containerd[2024]: time="2026-01-23T23:57:13.018786075Z" level=info msg="StopPodSandbox for \"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\" returns successfully"
Jan 23 23:57:13.025645 containerd[2024]: time="2026-01-23T23:57:13.024911991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74cfd6877d-hr9jw,Uid:64d067e9-db06-43a4-8ec2-5418bd9de44b,Namespace:calico-system,Attempt:1,}"
Jan 23 23:57:13.027777 systemd[1]: run-netns-cni\x2d52570254\x2ded76\x2d7107\x2d68fc\x2d391c876abe8d.mount: Deactivated successfully.
Jan 23 23:57:13.202472 kubelet[3241]: E0123 23:57:13.202243 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d6bcfdb8b-dvgwk" podUID="e0693b48-91c3-4d6b-a757-c65fc3ee493a"
Jan 23 23:57:13.367461 systemd-networkd[1938]: calia0a10b5544a: Link UP
Jan 23 23:57:13.368239 systemd-networkd[1938]: calia0a10b5544a: Gained carrier
Jan 23 23:57:13.410645 containerd[2024]: 2026-01-23 23:57:13.157 [INFO][5102] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0 calico-kube-controllers-74cfd6877d- calico-system 64d067e9-db06-43a4-8ec2-5418bd9de44b 987 0 2026-01-23 23:56:50 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74cfd6877d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-28-204 calico-kube-controllers-74cfd6877d-hr9jw eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia0a10b5544a [] [] }} ContainerID="4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0" Namespace="calico-system" Pod="calico-kube-controllers-74cfd6877d-hr9jw" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-"
Jan 23 23:57:13.410645 containerd[2024]: 2026-01-23 23:57:13.158 [INFO][5102] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0" Namespace="calico-system" Pod="calico-kube-controllers-74cfd6877d-hr9jw" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0"
Jan 23 23:57:13.410645 containerd[2024]: 2026-01-23 23:57:13.267 [INFO][5125] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0" HandleID="k8s-pod-network.4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0" Workload="ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0"
Jan 23 23:57:13.410645 containerd[2024]: 2026-01-23 23:57:13.269 [INFO][5125] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0" HandleID="k8s-pod-network.4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0" Workload="ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d940), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-204", "pod":"calico-kube-controllers-74cfd6877d-hr9jw", "timestamp":"2026-01-23 23:57:13.267016468 +0000 UTC"}, Hostname:"ip-172-31-28-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 23 23:57:13.410645 containerd[2024]: 2026-01-23 23:57:13.269 [INFO][5125] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:57:13.410645 containerd[2024]: 2026-01-23 23:57:13.270 [INFO][5125] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:57:13.410645 containerd[2024]: 2026-01-23 23:57:13.270 [INFO][5125] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-204'
Jan 23 23:57:13.410645 containerd[2024]: 2026-01-23 23:57:13.288 [INFO][5125] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0" host="ip-172-31-28-204"
Jan 23 23:57:13.410645 containerd[2024]: 2026-01-23 23:57:13.309 [INFO][5125] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-204"
Jan 23 23:57:13.410645 containerd[2024]: 2026-01-23 23:57:13.318 [INFO][5125] ipam/ipam.go 511: Trying affinity for 192.168.112.128/26 host="ip-172-31-28-204"
Jan 23 23:57:13.410645 containerd[2024]: 2026-01-23 23:57:13.322 [INFO][5125] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.128/26 host="ip-172-31-28-204"
Jan 23 23:57:13.410645 containerd[2024]: 2026-01-23 23:57:13.326 [INFO][5125] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ip-172-31-28-204"
Jan 23 23:57:13.410645 containerd[2024]: 2026-01-23 23:57:13.326 [INFO][5125] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0" host="ip-172-31-28-204"
Jan 23 23:57:13.410645 containerd[2024]: 2026-01-23 23:57:13.329 [INFO][5125] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0
Jan 23 23:57:13.410645 containerd[2024]: 2026-01-23 23:57:13.337 [INFO][5125] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0" host="ip-172-31-28-204"
Jan 23 23:57:13.410645 containerd[2024]: 2026-01-23 23:57:13.353 [INFO][5125] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.112.130/26] block=192.168.112.128/26 handle="k8s-pod-network.4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0" host="ip-172-31-28-204"
Jan 23 23:57:13.410645 containerd[2024]: 2026-01-23 23:57:13.354 [INFO][5125] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.130/26] handle="k8s-pod-network.4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0" host="ip-172-31-28-204"
Jan 23 23:57:13.410645 containerd[2024]: 2026-01-23 23:57:13.354 [INFO][5125] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:57:13.410645 containerd[2024]: 2026-01-23 23:57:13.354 [INFO][5125] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.112.130/26] IPv6=[] ContainerID="4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0" HandleID="k8s-pod-network.4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0" Workload="ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0" Jan 23 23:57:13.413419 containerd[2024]: 2026-01-23 23:57:13.358 [INFO][5102] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0" Namespace="calico-system" Pod="calico-kube-controllers-74cfd6877d-hr9jw" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0", GenerateName:"calico-kube-controllers-74cfd6877d-", Namespace:"calico-system", SelfLink:"", UID:"64d067e9-db06-43a4-8ec2-5418bd9de44b", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74cfd6877d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"", Pod:"calico-kube-controllers-74cfd6877d-hr9jw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia0a10b5544a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:13.413419 containerd[2024]: 2026-01-23 23:57:13.358 [INFO][5102] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.130/32] ContainerID="4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0" Namespace="calico-system" Pod="calico-kube-controllers-74cfd6877d-hr9jw" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0" Jan 23 23:57:13.413419 containerd[2024]: 2026-01-23 23:57:13.359 [INFO][5102] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia0a10b5544a ContainerID="4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0" Namespace="calico-system" Pod="calico-kube-controllers-74cfd6877d-hr9jw" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0" Jan 23 23:57:13.413419 containerd[2024]: 2026-01-23 23:57:13.365 [INFO][5102] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0" Namespace="calico-system" Pod="calico-kube-controllers-74cfd6877d-hr9jw" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0" Jan 23 23:57:13.413419 
containerd[2024]: 2026-01-23 23:57:13.367 [INFO][5102] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0" Namespace="calico-system" Pod="calico-kube-controllers-74cfd6877d-hr9jw" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0", GenerateName:"calico-kube-controllers-74cfd6877d-", Namespace:"calico-system", SelfLink:"", UID:"64d067e9-db06-43a4-8ec2-5418bd9de44b", ResourceVersion:"987", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74cfd6877d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0", Pod:"calico-kube-controllers-74cfd6877d-hr9jw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia0a10b5544a", MAC:"c2:79:6d:a1:fb:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:13.413419 containerd[2024]: 2026-01-23 23:57:13.405 [INFO][5102] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0" Namespace="calico-system" Pod="calico-kube-controllers-74cfd6877d-hr9jw" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0" Jan 23 23:57:13.463594 containerd[2024]: time="2026-01-23T23:57:13.462671609Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:13.463594 containerd[2024]: time="2026-01-23T23:57:13.462964217Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:13.463594 containerd[2024]: time="2026-01-23T23:57:13.463069217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:13.463837 containerd[2024]: time="2026-01-23T23:57:13.463571681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:13.524636 systemd[1]: Started cri-containerd-4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0.scope - libcontainer container 4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0. 
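Two details are worth decoding from the endpoint record just written: the Workload key ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0 joins node, orchestrator, pod, and interface with single dashes after doubling any dash inside a component, and the generated MAC c2:79:6d:a1:fb:93 has the locally-administered bit set, as expected for a veth address. A short sketch reproducing both, inferred from the log output rather than from Calico source:

    def wep_key(node: str, pod: str, iface: str, orch: str = "k8s") -> str:
        # Double every '-' inside a component so a single '-' can act as the separator.
        esc = lambda s: s.replace("-", "--")
        return "-".join([esc(node), orch, esc(pod), iface])

    print(wep_key("ip-172-31-28-204", "calico-kube-controllers-74cfd6877d-hr9jw", "eth0"))
    # ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0

    first_octet = int("c2:79:6d:a1:fb:93".split(":")[0], 16)
    print(bool(first_octet & 0x02), bool(first_octet & 0x01))  # True False: locally administered, unicast
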
Jan 23 23:57:13.598813 containerd[2024]: time="2026-01-23T23:57:13.598753530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74cfd6877d-hr9jw,Uid:64d067e9-db06-43a4-8ec2-5418bd9de44b,Namespace:calico-system,Attempt:1,} returns sandbox id \"4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0\"" Jan 23 23:57:13.604763 containerd[2024]: time="2026-01-23T23:57:13.604696974Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:57:13.829232 containerd[2024]: time="2026-01-23T23:57:13.828939307Z" level=info msg="StopPodSandbox for \"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\"" Jan 23 23:57:13.833166 containerd[2024]: time="2026-01-23T23:57:13.829872571Z" level=info msg="StopPodSandbox for \"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\"" Jan 23 23:57:13.849064 containerd[2024]: time="2026-01-23T23:57:13.848989495Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:13.853417 containerd[2024]: time="2026-01-23T23:57:13.853274383Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:57:13.854926 containerd[2024]: time="2026-01-23T23:57:13.853766287Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:57:13.855200 kubelet[3241]: E0123 23:57:13.854522 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:57:13.855200 kubelet[3241]: E0123 23:57:13.854591 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:57:13.855200 kubelet[3241]: E0123 23:57:13.854793 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxpxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74cfd6877d-hr9jw_calico-system(64d067e9-db06-43a4-8ec2-5418bd9de44b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:13.859360 kubelet[3241]: E0123 23:57:13.856283 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74cfd6877d-hr9jw" podUID="64d067e9-db06-43a4-8ec2-5418bd9de44b" Jan 23 23:57:13.872431 systemd-networkd[1938]: vxlan.calico: Gained IPv6LL Jan 23 23:57:14.070538 
containerd[2024]: 2026-01-23 23:57:13.979 [INFO][5224] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" Jan 23 23:57:14.070538 containerd[2024]: 2026-01-23 23:57:13.979 [INFO][5224] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" iface="eth0" netns="/var/run/netns/cni-95229f6e-0653-7421-3d44-7525b1a81adf" Jan 23 23:57:14.070538 containerd[2024]: 2026-01-23 23:57:13.982 [INFO][5224] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" iface="eth0" netns="/var/run/netns/cni-95229f6e-0653-7421-3d44-7525b1a81adf" Jan 23 23:57:14.070538 containerd[2024]: 2026-01-23 23:57:13.982 [INFO][5224] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" iface="eth0" netns="/var/run/netns/cni-95229f6e-0653-7421-3d44-7525b1a81adf" Jan 23 23:57:14.070538 containerd[2024]: 2026-01-23 23:57:13.982 [INFO][5224] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" Jan 23 23:57:14.070538 containerd[2024]: 2026-01-23 23:57:13.982 [INFO][5224] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" Jan 23 23:57:14.070538 containerd[2024]: 2026-01-23 23:57:14.036 [INFO][5242] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" HandleID="k8s-pod-network.eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" Workload="ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0" Jan 23 23:57:14.070538 containerd[2024]: 2026-01-23 23:57:14.037 [INFO][5242] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:14.070538 containerd[2024]: 2026-01-23 23:57:14.037 [INFO][5242] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:14.070538 containerd[2024]: 2026-01-23 23:57:14.056 [WARNING][5242] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" HandleID="k8s-pod-network.eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" Workload="ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0" Jan 23 23:57:14.070538 containerd[2024]: 2026-01-23 23:57:14.056 [INFO][5242] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" HandleID="k8s-pod-network.eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" Workload="ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0" Jan 23 23:57:14.070538 containerd[2024]: 2026-01-23 23:57:14.062 [INFO][5242] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:14.070538 containerd[2024]: 2026-01-23 23:57:14.066 [INFO][5224] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" Jan 23 23:57:14.076667 containerd[2024]: time="2026-01-23T23:57:14.070944112Z" level=info msg="TearDown network for sandbox \"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\" successfully" Jan 23 23:57:14.076667 containerd[2024]: time="2026-01-23T23:57:14.070988788Z" level=info msg="StopPodSandbox for \"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\" returns successfully" Jan 23 23:57:14.076667 containerd[2024]: time="2026-01-23T23:57:14.074989204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rn45p,Uid:46c86ab0-1223-4a22-bfcf-7f463abcf340,Namespace:calico-system,Attempt:1,}" Jan 23 23:57:14.081131 systemd[1]: run-netns-cni\x2d95229f6e\x2d0653\x2d7421\x2d3d44\x2d7525b1a81adf.mount: Deactivated successfully. Jan 23 23:57:14.113696 containerd[2024]: 2026-01-23 23:57:13.970 [INFO][5225] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" Jan 23 23:57:14.113696 containerd[2024]: 2026-01-23 23:57:13.971 [INFO][5225] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" iface="eth0" netns="/var/run/netns/cni-d4cd77d3-2290-1d5d-f4ef-830f53bac991" Jan 23 23:57:14.113696 containerd[2024]: 2026-01-23 23:57:13.972 [INFO][5225] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" iface="eth0" netns="/var/run/netns/cni-d4cd77d3-2290-1d5d-f4ef-830f53bac991" Jan 23 23:57:14.113696 containerd[2024]: 2026-01-23 23:57:13.973 [INFO][5225] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" iface="eth0" netns="/var/run/netns/cni-d4cd77d3-2290-1d5d-f4ef-830f53bac991" Jan 23 23:57:14.113696 containerd[2024]: 2026-01-23 23:57:13.973 [INFO][5225] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" Jan 23 23:57:14.113696 containerd[2024]: 2026-01-23 23:57:13.973 [INFO][5225] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" Jan 23 23:57:14.113696 containerd[2024]: 2026-01-23 23:57:14.041 [INFO][5239] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" HandleID="k8s-pod-network.4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0" Jan 23 23:57:14.113696 containerd[2024]: 2026-01-23 23:57:14.042 [INFO][5239] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:14.113696 containerd[2024]: 2026-01-23 23:57:14.062 [INFO][5239] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:14.113696 containerd[2024]: 2026-01-23 23:57:14.091 [WARNING][5239] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" HandleID="k8s-pod-network.4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0" Jan 23 23:57:14.113696 containerd[2024]: 2026-01-23 23:57:14.094 [INFO][5239] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" HandleID="k8s-pod-network.4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0" Jan 23 23:57:14.113696 containerd[2024]: 2026-01-23 23:57:14.099 [INFO][5239] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:14.113696 containerd[2024]: 2026-01-23 23:57:14.103 [INFO][5225] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" Jan 23 23:57:14.118353 containerd[2024]: time="2026-01-23T23:57:14.116467528Z" level=info msg="TearDown network for sandbox \"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\" successfully" Jan 23 23:57:14.119131 containerd[2024]: time="2026-01-23T23:57:14.118529261Z" level=info msg="StopPodSandbox for \"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\" returns successfully" Jan 23 23:57:14.120103 containerd[2024]: time="2026-01-23T23:57:14.119550845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d4bjn,Uid:91093aed-82d4-44e1-9d6b-e10aeaeca718,Namespace:kube-system,Attempt:1,}" Jan 23 23:57:14.130230 systemd[1]: run-netns-cni\x2dd4cd77d3\x2d2290\x2d1d5d\x2df4ef\x2d830f53bac991.mount: Deactivated successfully. Jan 23 23:57:14.213547 kubelet[3241]: E0123 23:57:14.212920 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74cfd6877d-hr9jw" podUID="64d067e9-db06-43a4-8ec2-5418bd9de44b" Jan 23 23:57:14.443206 systemd-networkd[1938]: calie2781ce6a36: Link UP Jan 23 23:57:14.447570 systemd-networkd[1938]: calie2781ce6a36: Gained carrier Jan 23 23:57:14.496568 containerd[2024]: 2026-01-23 23:57:14.218 [INFO][5254] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0 csi-node-driver- calico-system 46c86ab0-1223-4a22-bfcf-7f463abcf340 1012 0 2026-01-23 23:56:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-28-204 csi-node-driver-rn45p eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calie2781ce6a36 [] [] }} ContainerID="60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b" Namespace="calico-system" Pod="csi-node-driver-rn45p" 
WorkloadEndpoint="ip--172--31--28--204-k8s-csi--node--driver--rn45p-" Jan 23 23:57:14.496568 containerd[2024]: 2026-01-23 23:57:14.218 [INFO][5254] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b" Namespace="calico-system" Pod="csi-node-driver-rn45p" WorkloadEndpoint="ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0" Jan 23 23:57:14.496568 containerd[2024]: 2026-01-23 23:57:14.360 [INFO][5278] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b" HandleID="k8s-pod-network.60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b" Workload="ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0" Jan 23 23:57:14.496568 containerd[2024]: 2026-01-23 23:57:14.361 [INFO][5278] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b" HandleID="k8s-pod-network.60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b" Workload="ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400036ed50), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-204", "pod":"csi-node-driver-rn45p", "timestamp":"2026-01-23 23:57:14.360543402 +0000 UTC"}, Hostname:"ip-172-31-28-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:57:14.496568 containerd[2024]: 2026-01-23 23:57:14.361 [INFO][5278] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:14.496568 containerd[2024]: 2026-01-23 23:57:14.361 [INFO][5278] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:57:14.496568 containerd[2024]: 2026-01-23 23:57:14.362 [INFO][5278] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-204' Jan 23 23:57:14.496568 containerd[2024]: 2026-01-23 23:57:14.377 [INFO][5278] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b" host="ip-172-31-28-204" Jan 23 23:57:14.496568 containerd[2024]: 2026-01-23 23:57:14.386 [INFO][5278] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-204" Jan 23 23:57:14.496568 containerd[2024]: 2026-01-23 23:57:14.394 [INFO][5278] ipam/ipam.go 511: Trying affinity for 192.168.112.128/26 host="ip-172-31-28-204" Jan 23 23:57:14.496568 containerd[2024]: 2026-01-23 23:57:14.400 [INFO][5278] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.128/26 host="ip-172-31-28-204" Jan 23 23:57:14.496568 containerd[2024]: 2026-01-23 23:57:14.406 [INFO][5278] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ip-172-31-28-204" Jan 23 23:57:14.496568 containerd[2024]: 2026-01-23 23:57:14.406 [INFO][5278] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b" host="ip-172-31-28-204" Jan 23 23:57:14.496568 containerd[2024]: 2026-01-23 23:57:14.410 [INFO][5278] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b Jan 23 23:57:14.496568 containerd[2024]: 2026-01-23 23:57:14.416 [INFO][5278] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b" host="ip-172-31-28-204" Jan 23 23:57:14.496568 containerd[2024]: 2026-01-23 23:57:14.427 [INFO][5278] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.112.131/26] block=192.168.112.128/26 handle="k8s-pod-network.60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b" host="ip-172-31-28-204" Jan 23 23:57:14.496568 containerd[2024]: 2026-01-23 23:57:14.428 [INFO][5278] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.131/26] handle="k8s-pod-network.60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b" host="ip-172-31-28-204" Jan 23 23:57:14.496568 containerd[2024]: 2026-01-23 23:57:14.428 [INFO][5278] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
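Note how this second CNI ADD on the node (csi-node-driver-rn45p) draws from the same affine block and receives the next address up, 192.168.112.131, after .130 went to calico-kube-controllers. A toy allocator over the /26, sketched on the assumption that assignment is lowest-free-first, which matches the .130/.131/.132/.133 sequence in this section even though Calico's real allocator also tracks handles and reservations:

    import ipaddress

    block = ipaddress.ip_network("192.168.112.128/26")
    allocated = {0, 1, 2}  # ordinals in use so far, e.g. .128, .129 and the .130 claimed above

    def next_free(block, allocated):
        for ordinal in range(block.num_addresses):
            if ordinal not in allocated:
                allocated.add(ordinal)
                return block.network_address + ordinal
        raise RuntimeError("block exhausted")

    print(next_free(block, allocated))  # 192.168.112.131
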
Jan 23 23:57:14.496568 containerd[2024]: 2026-01-23 23:57:14.428 [INFO][5278] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.112.131/26] IPv6=[] ContainerID="60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b" HandleID="k8s-pod-network.60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b" Workload="ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0" Jan 23 23:57:14.499964 containerd[2024]: 2026-01-23 23:57:14.433 [INFO][5254] cni-plugin/k8s.go 418: Populated endpoint ContainerID="60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b" Namespace="calico-system" Pod="csi-node-driver-rn45p" WorkloadEndpoint="ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"46c86ab0-1223-4a22-bfcf-7f463abcf340", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"", Pod:"csi-node-driver-rn45p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.112.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2781ce6a36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:14.499964 containerd[2024]: 2026-01-23 23:57:14.435 [INFO][5254] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.131/32] ContainerID="60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b" Namespace="calico-system" Pod="csi-node-driver-rn45p" WorkloadEndpoint="ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0" Jan 23 23:57:14.499964 containerd[2024]: 2026-01-23 23:57:14.435 [INFO][5254] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie2781ce6a36 ContainerID="60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b" Namespace="calico-system" Pod="csi-node-driver-rn45p" WorkloadEndpoint="ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0" Jan 23 23:57:14.499964 containerd[2024]: 2026-01-23 23:57:14.448 [INFO][5254] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b" Namespace="calico-system" Pod="csi-node-driver-rn45p" WorkloadEndpoint="ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0" Jan 23 23:57:14.499964 containerd[2024]: 2026-01-23 23:57:14.450 [INFO][5254] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b" 
Namespace="calico-system" Pod="csi-node-driver-rn45p" WorkloadEndpoint="ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"46c86ab0-1223-4a22-bfcf-7f463abcf340", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b", Pod:"csi-node-driver-rn45p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.112.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2781ce6a36", MAC:"c6:af:b2:9c:6d:07", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:14.499964 containerd[2024]: 2026-01-23 23:57:14.485 [INFO][5254] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b" Namespace="calico-system" Pod="csi-node-driver-rn45p" WorkloadEndpoint="ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0" Jan 23 23:57:14.582496 containerd[2024]: time="2026-01-23T23:57:14.581289115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:14.582496 containerd[2024]: time="2026-01-23T23:57:14.581415175Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:14.582496 containerd[2024]: time="2026-01-23T23:57:14.581442451Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:14.582496 containerd[2024]: time="2026-01-23T23:57:14.581598955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:14.625870 systemd-networkd[1938]: calia3ec842652e: Link UP Jan 23 23:57:14.629196 systemd-networkd[1938]: calia3ec842652e: Gained carrier Jan 23 23:57:14.639758 systemd[1]: Started cri-containerd-60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b.scope - libcontainer container 60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b. 
Jan 23 23:57:14.699419 containerd[2024]: 2026-01-23 23:57:14.283 [INFO][5264] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0 coredns-674b8bbfcf- kube-system 91093aed-82d4-44e1-9d6b-e10aeaeca718 1011 0 2026-01-23 23:56:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-204 coredns-674b8bbfcf-d4bjn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia3ec842652e [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b" Namespace="kube-system" Pod="coredns-674b8bbfcf-d4bjn" WorkloadEndpoint="ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-" Jan 23 23:57:14.699419 containerd[2024]: 2026-01-23 23:57:14.284 [INFO][5264] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b" Namespace="kube-system" Pod="coredns-674b8bbfcf-d4bjn" WorkloadEndpoint="ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0" Jan 23 23:57:14.699419 containerd[2024]: 2026-01-23 23:57:14.363 [INFO][5283] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b" HandleID="k8s-pod-network.d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0" Jan 23 23:57:14.699419 containerd[2024]: 2026-01-23 23:57:14.365 [INFO][5283] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b" HandleID="k8s-pod-network.d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d36d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-204", "pod":"coredns-674b8bbfcf-d4bjn", "timestamp":"2026-01-23 23:57:14.36389913 +0000 UTC"}, Hostname:"ip-172-31-28-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:57:14.699419 containerd[2024]: 2026-01-23 23:57:14.366 [INFO][5283] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:14.699419 containerd[2024]: 2026-01-23 23:57:14.429 [INFO][5283] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
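The interleaved [5278] and [5283] traces show the host-wide IPAM lock doing its job: plugin 5283 logs "About to acquire" at 14.366 but only acquires at 14.429, immediately after 5278 releases at 14.428, so concurrent CNI ADDs on one node are serialized. A minimal sketch of that pattern, with an ordinary mutex standing in for Calico's host-wide lock:

    import threading, time

    ipam_lock = threading.Lock()

    def cni_add(pod: str):
        print(f"{pod}: about to acquire host-wide IPAM lock")
        with ipam_lock:              # the second caller blocks here until release
            print(f"{pod}: acquired host-wide IPAM lock")
            time.sleep(0.05)         # stand-in for the block lookup and claim
        print(f"{pod}: released host-wide IPAM lock")

    for name in ("csi-node-driver-rn45p", "coredns-674b8bbfcf-d4bjn"):
        threading.Thread(target=cni_add, args=(name,)).start()
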
Jan 23 23:57:14.699419 containerd[2024]: 2026-01-23 23:57:14.429 [INFO][5283] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-204' Jan 23 23:57:14.699419 containerd[2024]: 2026-01-23 23:57:14.484 [INFO][5283] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b" host="ip-172-31-28-204" Jan 23 23:57:14.699419 containerd[2024]: 2026-01-23 23:57:14.505 [INFO][5283] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-204" Jan 23 23:57:14.699419 containerd[2024]: 2026-01-23 23:57:14.523 [INFO][5283] ipam/ipam.go 511: Trying affinity for 192.168.112.128/26 host="ip-172-31-28-204" Jan 23 23:57:14.699419 containerd[2024]: 2026-01-23 23:57:14.538 [INFO][5283] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.128/26 host="ip-172-31-28-204" Jan 23 23:57:14.699419 containerd[2024]: 2026-01-23 23:57:14.547 [INFO][5283] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ip-172-31-28-204" Jan 23 23:57:14.699419 containerd[2024]: 2026-01-23 23:57:14.547 [INFO][5283] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b" host="ip-172-31-28-204" Jan 23 23:57:14.699419 containerd[2024]: 2026-01-23 23:57:14.555 [INFO][5283] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b Jan 23 23:57:14.699419 containerd[2024]: 2026-01-23 23:57:14.572 [INFO][5283] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b" host="ip-172-31-28-204" Jan 23 23:57:14.699419 containerd[2024]: 2026-01-23 23:57:14.590 [INFO][5283] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.112.132/26] block=192.168.112.128/26 handle="k8s-pod-network.d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b" host="ip-172-31-28-204" Jan 23 23:57:14.699419 containerd[2024]: 2026-01-23 23:57:14.590 [INFO][5283] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.132/26] handle="k8s-pod-network.d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b" host="ip-172-31-28-204" Jan 23 23:57:14.699419 containerd[2024]: 2026-01-23 23:57:14.590 [INFO][5283] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:57:14.699419 containerd[2024]: 2026-01-23 23:57:14.590 [INFO][5283] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.112.132/26] IPv6=[] ContainerID="d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b" HandleID="k8s-pod-network.d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0" Jan 23 23:57:14.700598 containerd[2024]: 2026-01-23 23:57:14.619 [INFO][5264] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b" Namespace="kube-system" Pod="coredns-674b8bbfcf-d4bjn" WorkloadEndpoint="ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"91093aed-82d4-44e1-9d6b-e10aeaeca718", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"", Pod:"coredns-674b8bbfcf-d4bjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3ec842652e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:14.700598 containerd[2024]: 2026-01-23 23:57:14.620 [INFO][5264] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.132/32] ContainerID="d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b" Namespace="kube-system" Pod="coredns-674b8bbfcf-d4bjn" WorkloadEndpoint="ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0" Jan 23 23:57:14.700598 containerd[2024]: 2026-01-23 23:57:14.620 [INFO][5264] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia3ec842652e ContainerID="d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b" Namespace="kube-system" Pod="coredns-674b8bbfcf-d4bjn" WorkloadEndpoint="ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0" Jan 23 23:57:14.700598 containerd[2024]: 2026-01-23 23:57:14.630 [INFO][5264] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b" Namespace="kube-system" Pod="coredns-674b8bbfcf-d4bjn" 
WorkloadEndpoint="ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0" Jan 23 23:57:14.700598 containerd[2024]: 2026-01-23 23:57:14.636 [INFO][5264] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b" Namespace="kube-system" Pod="coredns-674b8bbfcf-d4bjn" WorkloadEndpoint="ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"91093aed-82d4-44e1-9d6b-e10aeaeca718", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b", Pod:"coredns-674b8bbfcf-d4bjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3ec842652e", MAC:"42:23:58:97:87:36", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:14.700598 containerd[2024]: 2026-01-23 23:57:14.681 [INFO][5264] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b" Namespace="kube-system" Pod="coredns-674b8bbfcf-d4bjn" WorkloadEndpoint="ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0" Jan 23 23:57:14.705576 systemd-networkd[1938]: calia0a10b5544a: Gained IPv6LL Jan 23 23:57:14.760102 containerd[2024]: time="2026-01-23T23:57:14.759858488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:14.760503 containerd[2024]: time="2026-01-23T23:57:14.760046828Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:14.760503 containerd[2024]: time="2026-01-23T23:57:14.760362404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:14.761127 containerd[2024]: time="2026-01-23T23:57:14.760970828Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:14.819404 containerd[2024]: time="2026-01-23T23:57:14.819018632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-rn45p,Uid:46c86ab0-1223-4a22-bfcf-7f463abcf340,Namespace:calico-system,Attempt:1,} returns sandbox id \"60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b\"" Jan 23 23:57:14.826917 containerd[2024]: time="2026-01-23T23:57:14.826851908Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:57:14.829788 containerd[2024]: time="2026-01-23T23:57:14.828437612Z" level=info msg="StopPodSandbox for \"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\"" Jan 23 23:57:14.830785 systemd[1]: Started cri-containerd-d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b.scope - libcontainer container d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b. Jan 23 23:57:14.958233 containerd[2024]: time="2026-01-23T23:57:14.957138741Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-d4bjn,Uid:91093aed-82d4-44e1-9d6b-e10aeaeca718,Namespace:kube-system,Attempt:1,} returns sandbox id \"d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b\"" Jan 23 23:57:14.994122 containerd[2024]: time="2026-01-23T23:57:14.994037865Z" level=info msg="CreateContainer within sandbox \"d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:57:15.030720 containerd[2024]: time="2026-01-23T23:57:15.030638789Z" level=info msg="CreateContainer within sandbox \"d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fd10d6d63fd1e9cb6397a6471717865c8321cef012b42029a53daa55f5cb5f8f\"" Jan 23 23:57:15.032358 containerd[2024]: time="2026-01-23T23:57:15.032257505Z" level=info msg="StartContainer for \"fd10d6d63fd1e9cb6397a6471717865c8321cef012b42029a53daa55f5cb5f8f\"" Jan 23 23:57:15.091509 containerd[2024]: 2026-01-23 23:57:14.971 [INFO][5388] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" Jan 23 23:57:15.091509 containerd[2024]: 2026-01-23 23:57:14.971 [INFO][5388] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" iface="eth0" netns="/var/run/netns/cni-07abdbc7-b905-a45c-fff4-2cc2ee6a90c4" Jan 23 23:57:15.091509 containerd[2024]: 2026-01-23 23:57:14.972 [INFO][5388] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" iface="eth0" netns="/var/run/netns/cni-07abdbc7-b905-a45c-fff4-2cc2ee6a90c4" Jan 23 23:57:15.091509 containerd[2024]: 2026-01-23 23:57:14.976 [INFO][5388] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" iface="eth0" netns="/var/run/netns/cni-07abdbc7-b905-a45c-fff4-2cc2ee6a90c4" Jan 23 23:57:15.091509 containerd[2024]: 2026-01-23 23:57:14.976 [INFO][5388] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" Jan 23 23:57:15.091509 containerd[2024]: 2026-01-23 23:57:14.976 [INFO][5388] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" Jan 23 23:57:15.091509 containerd[2024]: 2026-01-23 23:57:15.043 [INFO][5408] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" HandleID="k8s-pod-network.54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" Workload="ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0" Jan 23 23:57:15.091509 containerd[2024]: 2026-01-23 23:57:15.044 [INFO][5408] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:15.091509 containerd[2024]: 2026-01-23 23:57:15.044 [INFO][5408] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:15.091509 containerd[2024]: 2026-01-23 23:57:15.072 [WARNING][5408] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" HandleID="k8s-pod-network.54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" Workload="ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0" Jan 23 23:57:15.091509 containerd[2024]: 2026-01-23 23:57:15.072 [INFO][5408] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" HandleID="k8s-pod-network.54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" Workload="ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0" Jan 23 23:57:15.091509 containerd[2024]: 2026-01-23 23:57:15.078 [INFO][5408] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:15.091509 containerd[2024]: 2026-01-23 23:57:15.083 [INFO][5388] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" Jan 23 23:57:15.094500 containerd[2024]: time="2026-01-23T23:57:15.093743945Z" level=info msg="TearDown network for sandbox \"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\" successfully" Jan 23 23:57:15.094500 containerd[2024]: time="2026-01-23T23:57:15.093791945Z" level=info msg="StopPodSandbox for \"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\" returns successfully" Jan 23 23:57:15.097057 containerd[2024]: time="2026-01-23T23:57:15.096732965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-q6gtf,Uid:40024e0b-dc12-464a-9bd9-6f315f803fe4,Namespace:calico-system,Attempt:1,}" Jan 23 23:57:15.107062 containerd[2024]: time="2026-01-23T23:57:15.106818869Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:15.109262 containerd[2024]: time="2026-01-23T23:57:15.109169333Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:57:15.109509 containerd[2024]: time="2026-01-23T23:57:15.109424705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:57:15.109863 systemd[1]: Started cri-containerd-fd10d6d63fd1e9cb6397a6471717865c8321cef012b42029a53daa55f5cb5f8f.scope - libcontainer container fd10d6d63fd1e9cb6397a6471717865c8321cef012b42029a53daa55f5cb5f8f. Jan 23 23:57:15.112639 kubelet[3241]: E0123 23:57:15.111118 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:57:15.112639 kubelet[3241]: E0123 23:57:15.111180 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:57:15.123168 kubelet[3241]: E0123 23:57:15.122797 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pvhxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rn45p_calico-system(46c86ab0-1223-4a22-bfcf-7f463abcf340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:15.127794 containerd[2024]: time="2026-01-23T23:57:15.127440738Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:57:15.208154 containerd[2024]: time="2026-01-23T23:57:15.208079010Z" level=info msg="StartContainer for \"fd10d6d63fd1e9cb6397a6471717865c8321cef012b42029a53daa55f5cb5f8f\" returns successfully" Jan 23 23:57:15.255673 kubelet[3241]: E0123 23:57:15.254612 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74cfd6877d-hr9jw" podUID="64d067e9-db06-43a4-8ec2-5418bd9de44b" Jan 23 23:57:15.335232 kubelet[3241]: I0123 23:57:15.334164 3241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-d4bjn" podStartSLOduration=55.334134547 podStartE2EDuration="55.334134547s" podCreationTimestamp="2026-01-23 23:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:57:15.331920667 
+0000 UTC m=+59.794435870" watchObservedRunningTime="2026-01-23 23:57:15.334134547 +0000 UTC m=+59.796649738" Jan 23 23:57:15.451605 containerd[2024]: time="2026-01-23T23:57:15.451549591Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:15.455352 containerd[2024]: time="2026-01-23T23:57:15.454457743Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:57:15.455663 containerd[2024]: time="2026-01-23T23:57:15.454533319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:57:15.456193 kubelet[3241]: E0123 23:57:15.456119 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:57:15.456352 kubelet[3241]: E0123 23:57:15.456230 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:57:15.467878 kubelet[3241]: E0123 23:57:15.466939 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pvhxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rn45p_calico-system(46c86ab0-1223-4a22-bfcf-7f463abcf340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:15.470399 kubelet[3241]: E0123 23:57:15.468611 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rn45p" podUID="46c86ab0-1223-4a22-bfcf-7f463abcf340" Jan 23 23:57:15.490728 systemd[1]: run-netns-cni\x2d07abdbc7\x2db905\x2da45c\x2dfff4\x2d2cc2ee6a90c4.mount: Deactivated successfully. 
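Each failed pull above moves the container from ErrImagePull to ImagePullBackOff, and the kubelet retries on an exponential backoff; the 10s initial delay doubling to a 300s ceiling below is assumed from kubelet defaults rather than read from this log. A sketch of that schedule:

    def backoff_schedule(initial=10.0, factor=2.0, cap=300.0, retries=8):
        delay, out = initial, []
        for _ in range(retries):
            out.append(min(delay, cap))
            delay *= factor
        return out

    print(backoff_schedule())  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0, 300.0, 300.0]

The pulls themselves fail with NotFound because ghcr.io/flatcar/calico/csi:v3.30.4 and its sibling tags cannot be resolved on ghcr.io, so no amount of backoff will clear the error until the image references are fixed or the images published.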
Jan 23 23:57:15.501738 systemd-networkd[1938]: cali29b03597218: Link UP Jan 23 23:57:15.503403 systemd-networkd[1938]: cali29b03597218: Gained carrier Jan 23 23:57:15.561190 containerd[2024]: 2026-01-23 23:57:15.256 [INFO][5441] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0 goldmane-666569f655- calico-system 40024e0b-dc12-464a-9bd9-6f315f803fe4 1029 0 2026-01-23 23:56:46 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-28-204 goldmane-666569f655-q6gtf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali29b03597218 [] [] }} ContainerID="727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126" Namespace="calico-system" Pod="goldmane-666569f655-q6gtf" WorkloadEndpoint="ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-" Jan 23 23:57:15.561190 containerd[2024]: 2026-01-23 23:57:15.257 [INFO][5441] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126" Namespace="calico-system" Pod="goldmane-666569f655-q6gtf" WorkloadEndpoint="ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0" Jan 23 23:57:15.561190 containerd[2024]: 2026-01-23 23:57:15.337 [INFO][5463] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126" HandleID="k8s-pod-network.727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126" Workload="ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0" Jan 23 23:57:15.561190 containerd[2024]: 2026-01-23 23:57:15.338 [INFO][5463] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126" HandleID="k8s-pod-network.727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126" Workload="ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3050), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-28-204", "pod":"goldmane-666569f655-q6gtf", "timestamp":"2026-01-23 23:57:15.337924003 +0000 UTC"}, Hostname:"ip-172-31-28-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:57:15.561190 containerd[2024]: 2026-01-23 23:57:15.338 [INFO][5463] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:15.561190 containerd[2024]: 2026-01-23 23:57:15.338 [INFO][5463] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:57:15.561190 containerd[2024]: 2026-01-23 23:57:15.338 [INFO][5463] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-204' Jan 23 23:57:15.561190 containerd[2024]: 2026-01-23 23:57:15.367 [INFO][5463] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126" host="ip-172-31-28-204" Jan 23 23:57:15.561190 containerd[2024]: 2026-01-23 23:57:15.379 [INFO][5463] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-204" Jan 23 23:57:15.561190 containerd[2024]: 2026-01-23 23:57:15.413 [INFO][5463] ipam/ipam.go 511: Trying affinity for 192.168.112.128/26 host="ip-172-31-28-204" Jan 23 23:57:15.561190 containerd[2024]: 2026-01-23 23:57:15.417 [INFO][5463] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.128/26 host="ip-172-31-28-204" Jan 23 23:57:15.561190 containerd[2024]: 2026-01-23 23:57:15.427 [INFO][5463] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ip-172-31-28-204" Jan 23 23:57:15.561190 containerd[2024]: 2026-01-23 23:57:15.427 [INFO][5463] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126" host="ip-172-31-28-204" Jan 23 23:57:15.561190 containerd[2024]: 2026-01-23 23:57:15.433 [INFO][5463] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126 Jan 23 23:57:15.561190 containerd[2024]: 2026-01-23 23:57:15.441 [INFO][5463] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126" host="ip-172-31-28-204" Jan 23 23:57:15.561190 containerd[2024]: 2026-01-23 23:57:15.467 [INFO][5463] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.112.133/26] block=192.168.112.128/26 handle="k8s-pod-network.727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126" host="ip-172-31-28-204" Jan 23 23:57:15.561190 containerd[2024]: 2026-01-23 23:57:15.468 [INFO][5463] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.133/26] handle="k8s-pod-network.727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126" host="ip-172-31-28-204" Jan 23 23:57:15.561190 containerd[2024]: 2026-01-23 23:57:15.469 [INFO][5463] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:57:15.561190 containerd[2024]: 2026-01-23 23:57:15.469 [INFO][5463] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.112.133/26] IPv6=[] ContainerID="727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126" HandleID="k8s-pod-network.727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126" Workload="ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0" Jan 23 23:57:15.564622 containerd[2024]: 2026-01-23 23:57:15.477 [INFO][5441] cni-plugin/k8s.go 418: Populated endpoint ContainerID="727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126" Namespace="calico-system" Pod="goldmane-666569f655-q6gtf" WorkloadEndpoint="ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"40024e0b-dc12-464a-9bd9-6f315f803fe4", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"", Pod:"goldmane-666569f655-q6gtf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.112.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali29b03597218", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:15.564622 containerd[2024]: 2026-01-23 23:57:15.478 [INFO][5441] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.133/32] ContainerID="727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126" Namespace="calico-system" Pod="goldmane-666569f655-q6gtf" WorkloadEndpoint="ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0" Jan 23 23:57:15.564622 containerd[2024]: 2026-01-23 23:57:15.478 [INFO][5441] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29b03597218 ContainerID="727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126" Namespace="calico-system" Pod="goldmane-666569f655-q6gtf" WorkloadEndpoint="ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0" Jan 23 23:57:15.564622 containerd[2024]: 2026-01-23 23:57:15.508 [INFO][5441] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126" Namespace="calico-system" Pod="goldmane-666569f655-q6gtf" WorkloadEndpoint="ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0" Jan 23 23:57:15.564622 containerd[2024]: 2026-01-23 23:57:15.514 [INFO][5441] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126" Namespace="calico-system" Pod="goldmane-666569f655-q6gtf" 
WorkloadEndpoint="ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"40024e0b-dc12-464a-9bd9-6f315f803fe4", ResourceVersion:"1029", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126", Pod:"goldmane-666569f655-q6gtf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.112.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali29b03597218", MAC:"66:30:75:02:15:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:15.564622 containerd[2024]: 2026-01-23 23:57:15.551 [INFO][5441] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126" Namespace="calico-system" Pod="goldmane-666569f655-q6gtf" WorkloadEndpoint="ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0" Jan 23 23:57:15.614512 containerd[2024]: time="2026-01-23T23:57:15.612716912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:15.614512 containerd[2024]: time="2026-01-23T23:57:15.612823952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:15.614512 containerd[2024]: time="2026-01-23T23:57:15.612884300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:15.614512 containerd[2024]: time="2026-01-23T23:57:15.613058132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:57:15.669438 systemd[1]: Started cri-containerd-727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126.scope - libcontainer container 727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126. 
Jan 23 23:57:15.761072 containerd[2024]: time="2026-01-23T23:57:15.760824957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-q6gtf,Uid:40024e0b-dc12-464a-9bd9-6f315f803fe4,Namespace:calico-system,Attempt:1,} returns sandbox id \"727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126\"" Jan 23 23:57:15.780693 containerd[2024]: time="2026-01-23T23:57:15.780279561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:57:15.789026 containerd[2024]: time="2026-01-23T23:57:15.788960061Z" level=info msg="StopPodSandbox for \"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\"" Jan 23 23:57:15.829342 containerd[2024]: time="2026-01-23T23:57:15.829089177Z" level=info msg="StopPodSandbox for \"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\"" Jan 23 23:57:15.830864 containerd[2024]: time="2026-01-23T23:57:15.830802837Z" level=info msg="StopPodSandbox for \"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\"" Jan 23 23:57:16.041443 containerd[2024]: time="2026-01-23T23:57:16.041179626Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:16.047564 containerd[2024]: time="2026-01-23T23:57:16.045894006Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:57:16.047564 containerd[2024]: time="2026-01-23T23:57:16.046065210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 23:57:16.047790 kubelet[3241]: E0123 23:57:16.046276 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:57:16.047790 kubelet[3241]: E0123 23:57:16.046989 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:57:16.047790 kubelet[3241]: E0123 23:57:16.047270 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-47fj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-q6gtf_calico-system(40024e0b-dc12-464a-9bd9-6f315f803fe4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:16.051649 kubelet[3241]: E0123 23:57:16.048819 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q6gtf" podUID="40024e0b-dc12-464a-9bd9-6f315f803fe4" Jan 23 23:57:16.050467 systemd-networkd[1938]: 
calie2781ce6a36: Gained IPv6LL Jan 23 23:57:16.112399 systemd[1]: Started sshd@8-172.31.28.204:22-4.153.228.146:56374.service - OpenSSH per-connection server daemon (4.153.228.146:56374). Jan 23 23:57:16.125722 containerd[2024]: 2026-01-23 23:57:15.944 [WARNING][5529] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0", GenerateName:"calico-kube-controllers-74cfd6877d-", Namespace:"calico-system", SelfLink:"", UID:"64d067e9-db06-43a4-8ec2-5418bd9de44b", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74cfd6877d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0", Pod:"calico-kube-controllers-74cfd6877d-hr9jw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia0a10b5544a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:16.125722 containerd[2024]: 2026-01-23 23:57:15.948 [INFO][5529] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" Jan 23 23:57:16.125722 containerd[2024]: 2026-01-23 23:57:15.948 [INFO][5529] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" iface="eth0" netns="" Jan 23 23:57:16.125722 containerd[2024]: 2026-01-23 23:57:15.949 [INFO][5529] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" Jan 23 23:57:16.125722 containerd[2024]: 2026-01-23 23:57:15.949 [INFO][5529] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" Jan 23 23:57:16.125722 containerd[2024]: 2026-01-23 23:57:16.062 [INFO][5566] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" HandleID="k8s-pod-network.e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" Workload="ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0" Jan 23 23:57:16.125722 containerd[2024]: 2026-01-23 23:57:16.063 [INFO][5566] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 23:57:16.125722 containerd[2024]: 2026-01-23 23:57:16.063 [INFO][5566] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:16.125722 containerd[2024]: 2026-01-23 23:57:16.091 [WARNING][5566] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" HandleID="k8s-pod-network.e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" Workload="ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0" Jan 23 23:57:16.125722 containerd[2024]: 2026-01-23 23:57:16.091 [INFO][5566] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" HandleID="k8s-pod-network.e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" Workload="ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0" Jan 23 23:57:16.125722 containerd[2024]: 2026-01-23 23:57:16.100 [INFO][5566] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:16.125722 containerd[2024]: 2026-01-23 23:57:16.116 [INFO][5529] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" Jan 23 23:57:16.127036 containerd[2024]: time="2026-01-23T23:57:16.126855642Z" level=info msg="TearDown network for sandbox \"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\" successfully" Jan 23 23:57:16.127036 containerd[2024]: time="2026-01-23T23:57:16.126904902Z" level=info msg="StopPodSandbox for \"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\" returns successfully" Jan 23 23:57:16.128082 containerd[2024]: time="2026-01-23T23:57:16.128034066Z" level=info msg="RemovePodSandbox for \"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\"" Jan 23 23:57:16.130975 containerd[2024]: time="2026-01-23T23:57:16.130591315Z" level=info msg="Forcibly stopping sandbox \"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\"" Jan 23 23:57:16.212506 containerd[2024]: 2026-01-23 23:57:16.007 [INFO][5554] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" Jan 23 23:57:16.212506 containerd[2024]: 2026-01-23 23:57:16.008 [INFO][5554] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" iface="eth0" netns="/var/run/netns/cni-9791a1ef-b8b5-010b-3f64-706fda5b4d67" Jan 23 23:57:16.212506 containerd[2024]: 2026-01-23 23:57:16.009 [INFO][5554] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" iface="eth0" netns="/var/run/netns/cni-9791a1ef-b8b5-010b-3f64-706fda5b4d67" Jan 23 23:57:16.212506 containerd[2024]: 2026-01-23 23:57:16.011 [INFO][5554] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" iface="eth0" netns="/var/run/netns/cni-9791a1ef-b8b5-010b-3f64-706fda5b4d67" Jan 23 23:57:16.212506 containerd[2024]: 2026-01-23 23:57:16.011 [INFO][5554] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" Jan 23 23:57:16.212506 containerd[2024]: 2026-01-23 23:57:16.011 [INFO][5554] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" Jan 23 23:57:16.212506 containerd[2024]: 2026-01-23 23:57:16.135 [INFO][5574] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" HandleID="k8s-pod-network.978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0" Jan 23 23:57:16.212506 containerd[2024]: 2026-01-23 23:57:16.136 [INFO][5574] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:16.212506 containerd[2024]: 2026-01-23 23:57:16.137 [INFO][5574] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:16.212506 containerd[2024]: 2026-01-23 23:57:16.190 [WARNING][5574] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" HandleID="k8s-pod-network.978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0" Jan 23 23:57:16.212506 containerd[2024]: 2026-01-23 23:57:16.191 [INFO][5574] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" HandleID="k8s-pod-network.978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0" Jan 23 23:57:16.212506 containerd[2024]: 2026-01-23 23:57:16.196 [INFO][5574] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:16.212506 containerd[2024]: 2026-01-23 23:57:16.202 [INFO][5554] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" Jan 23 23:57:16.214585 containerd[2024]: time="2026-01-23T23:57:16.214503451Z" level=info msg="TearDown network for sandbox \"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\" successfully" Jan 23 23:57:16.214585 containerd[2024]: time="2026-01-23T23:57:16.214570231Z" level=info msg="StopPodSandbox for \"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\" returns successfully" Jan 23 23:57:16.217376 containerd[2024]: time="2026-01-23T23:57:16.216831427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6s8v5,Uid:9f58d96e-4844-4626-a760-be9823990f64,Namespace:kube-system,Attempt:1,}" Jan 23 23:57:16.220972 systemd[1]: run-netns-cni\x2d9791a1ef\x2db8b5\x2d010b\x2d3f64\x2d706fda5b4d67.mount: Deactivated successfully. 
Jan 23 23:57:16.303630 kubelet[3241]: E0123 23:57:16.301773 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q6gtf" podUID="40024e0b-dc12-464a-9bd9-6f315f803fe4" Jan 23 23:57:16.304256 systemd-networkd[1938]: calia3ec842652e: Gained IPv6LL Jan 23 23:57:16.312445 kubelet[3241]: E0123 23:57:16.311747 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rn45p" podUID="46c86ab0-1223-4a22-bfcf-7f463abcf340" Jan 23 23:57:16.428248 containerd[2024]: 2026-01-23 23:57:16.085 [INFO][5555] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" Jan 23 23:57:16.428248 containerd[2024]: 2026-01-23 23:57:16.086 [INFO][5555] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" iface="eth0" netns="/var/run/netns/cni-846ea016-dc38-8d26-bf28-b5358a893570" Jan 23 23:57:16.428248 containerd[2024]: 2026-01-23 23:57:16.095 [INFO][5555] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" iface="eth0" netns="/var/run/netns/cni-846ea016-dc38-8d26-bf28-b5358a893570" Jan 23 23:57:16.428248 containerd[2024]: 2026-01-23 23:57:16.111 [INFO][5555] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" iface="eth0" netns="/var/run/netns/cni-846ea016-dc38-8d26-bf28-b5358a893570" Jan 23 23:57:16.428248 containerd[2024]: 2026-01-23 23:57:16.118 [INFO][5555] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" Jan 23 23:57:16.428248 containerd[2024]: 2026-01-23 23:57:16.118 [INFO][5555] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" Jan 23 23:57:16.428248 containerd[2024]: 2026-01-23 23:57:16.332 [INFO][5585] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" HandleID="k8s-pod-network.16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0" Jan 23 23:57:16.428248 containerd[2024]: 2026-01-23 23:57:16.333 [INFO][5585] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:16.428248 containerd[2024]: 2026-01-23 23:57:16.333 [INFO][5585] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:16.428248 containerd[2024]: 2026-01-23 23:57:16.390 [WARNING][5585] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" HandleID="k8s-pod-network.16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0" Jan 23 23:57:16.428248 containerd[2024]: 2026-01-23 23:57:16.390 [INFO][5585] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" HandleID="k8s-pod-network.16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0" Jan 23 23:57:16.428248 containerd[2024]: 2026-01-23 23:57:16.410 [INFO][5585] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:16.428248 containerd[2024]: 2026-01-23 23:57:16.420 [INFO][5555] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" Jan 23 23:57:16.430844 containerd[2024]: time="2026-01-23T23:57:16.428493548Z" level=info msg="TearDown network for sandbox \"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\" successfully" Jan 23 23:57:16.430844 containerd[2024]: time="2026-01-23T23:57:16.428533760Z" level=info msg="StopPodSandbox for \"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\" returns successfully" Jan 23 23:57:16.430844 containerd[2024]: time="2026-01-23T23:57:16.430265216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6976454ff7-ddg9z,Uid:7d79c384-4d50-4538-9d9a-312b65c47eb8,Namespace:calico-apiserver,Attempt:1,}" Jan 23 23:57:16.476206 systemd[1]: run-netns-cni\x2d846ea016\x2ddc38\x2d8d26\x2dbf28\x2db5358a893570.mount: Deactivated successfully. Jan 23 23:57:16.658666 containerd[2024]: 2026-01-23 23:57:16.445 [WARNING][5597] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0", GenerateName:"calico-kube-controllers-74cfd6877d-", Namespace:"calico-system", SelfLink:"", UID:"64d067e9-db06-43a4-8ec2-5418bd9de44b", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74cfd6877d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"4172dfd37e26ff1ba8d99761cf5e9d2c614e8177224ddcc65c0b075f53fa1bc0", Pod:"calico-kube-controllers-74cfd6877d-hr9jw", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia0a10b5544a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:16.658666 containerd[2024]: 2026-01-23 23:57:16.447 [INFO][5597] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" Jan 23 23:57:16.658666 containerd[2024]: 2026-01-23 23:57:16.447 [INFO][5597] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" iface="eth0" netns="" Jan 23 23:57:16.658666 containerd[2024]: 2026-01-23 23:57:16.447 [INFO][5597] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" Jan 23 23:57:16.658666 containerd[2024]: 2026-01-23 23:57:16.447 [INFO][5597] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" Jan 23 23:57:16.658666 containerd[2024]: 2026-01-23 23:57:16.600 [INFO][5620] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" HandleID="k8s-pod-network.e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" Workload="ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0" Jan 23 23:57:16.658666 containerd[2024]: 2026-01-23 23:57:16.601 [INFO][5620] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:16.658666 containerd[2024]: 2026-01-23 23:57:16.601 [INFO][5620] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:16.658666 containerd[2024]: 2026-01-23 23:57:16.641 [WARNING][5620] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" HandleID="k8s-pod-network.e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" Workload="ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0" Jan 23 23:57:16.658666 containerd[2024]: 2026-01-23 23:57:16.642 [INFO][5620] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" HandleID="k8s-pod-network.e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" Workload="ip--172--31--28--204-k8s-calico--kube--controllers--74cfd6877d--hr9jw-eth0" Jan 23 23:57:16.658666 containerd[2024]: 2026-01-23 23:57:16.648 [INFO][5620] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:16.658666 containerd[2024]: 2026-01-23 23:57:16.653 [INFO][5597] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f" Jan 23 23:57:16.660968 containerd[2024]: time="2026-01-23T23:57:16.659675589Z" level=info msg="TearDown network for sandbox \"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\" successfully" Jan 23 23:57:16.677127 containerd[2024]: time="2026-01-23T23:57:16.676436601Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:57:16.677127 containerd[2024]: time="2026-01-23T23:57:16.676537605Z" level=info msg="RemovePodSandbox \"e8d9252d9177e83b9238b00fa301706542ef123aaaeb17bd521c8499990c456f\" returns successfully" Jan 23 23:57:16.678282 containerd[2024]: time="2026-01-23T23:57:16.677946417Z" level=info msg="StopPodSandbox for \"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\"" Jan 23 23:57:16.710822 sshd[5582]: Accepted publickey for core from 4.153.228.146 port 56374 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:16.718791 sshd[5582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:16.740053 systemd-logind[2010]: New session 9 of user core. Jan 23 23:57:16.746685 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 23 23:57:16.830946 containerd[2024]: time="2026-01-23T23:57:16.830512198Z" level=info msg="StopPodSandbox for \"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\"" Jan 23 23:57:16.906058 systemd-networkd[1938]: cali5c3077ac8bf: Link UP Jan 23 23:57:16.914165 systemd-networkd[1938]: cali5c3077ac8bf: Gained carrier Jan 23 23:57:16.980596 containerd[2024]: 2026-01-23 23:57:16.554 [INFO][5604] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0 coredns-674b8bbfcf- kube-system 9f58d96e-4844-4626-a760-be9823990f64 1057 0 2026-01-23 23:56:20 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-28-204 coredns-674b8bbfcf-6s8v5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5c3077ac8bf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-6s8v5" WorkloadEndpoint="ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-" Jan 23 23:57:16.980596 containerd[2024]: 2026-01-23 23:57:16.555 [INFO][5604] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-6s8v5" WorkloadEndpoint="ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0" Jan 23 23:57:16.980596 containerd[2024]: 2026-01-23 23:57:16.668 [INFO][5638] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd" HandleID="k8s-pod-network.bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0" Jan 23 23:57:16.980596 containerd[2024]: 2026-01-23 23:57:16.670 [INFO][5638] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd" HandleID="k8s-pod-network.bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004dd30), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-28-204", "pod":"coredns-674b8bbfcf-6s8v5", "timestamp":"2026-01-23 23:57:16.668802873 +0000 UTC"}, Hostname:"ip-172-31-28-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:57:16.980596 containerd[2024]: 2026-01-23 23:57:16.671 [INFO][5638] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:16.980596 containerd[2024]: 2026-01-23 23:57:16.671 [INFO][5638] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:57:16.980596 containerd[2024]: 2026-01-23 23:57:16.671 [INFO][5638] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-204' Jan 23 23:57:16.980596 containerd[2024]: 2026-01-23 23:57:16.717 [INFO][5638] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd" host="ip-172-31-28-204" Jan 23 23:57:16.980596 containerd[2024]: 2026-01-23 23:57:16.762 [INFO][5638] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-204" Jan 23 23:57:16.980596 containerd[2024]: 2026-01-23 23:57:16.786 [INFO][5638] ipam/ipam.go 511: Trying affinity for 192.168.112.128/26 host="ip-172-31-28-204" Jan 23 23:57:16.980596 containerd[2024]: 2026-01-23 23:57:16.790 [INFO][5638] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.128/26 host="ip-172-31-28-204" Jan 23 23:57:16.980596 containerd[2024]: 2026-01-23 23:57:16.795 [INFO][5638] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ip-172-31-28-204" Jan 23 23:57:16.980596 containerd[2024]: 2026-01-23 23:57:16.797 [INFO][5638] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd" host="ip-172-31-28-204" Jan 23 23:57:16.980596 containerd[2024]: 2026-01-23 23:57:16.801 [INFO][5638] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd Jan 23 23:57:16.980596 containerd[2024]: 2026-01-23 23:57:16.813 [INFO][5638] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd" host="ip-172-31-28-204" Jan 23 23:57:16.980596 containerd[2024]: 2026-01-23 23:57:16.835 [INFO][5638] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.112.134/26] block=192.168.112.128/26 handle="k8s-pod-network.bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd" host="ip-172-31-28-204" Jan 23 23:57:16.980596 containerd[2024]: 2026-01-23 23:57:16.836 [INFO][5638] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.134/26] handle="k8s-pod-network.bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd" host="ip-172-31-28-204" Jan 23 23:57:16.980596 containerd[2024]: 2026-01-23 23:57:16.837 [INFO][5638] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:57:16.980596 containerd[2024]: 2026-01-23 23:57:16.839 [INFO][5638] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.112.134/26] IPv6=[] ContainerID="bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd" HandleID="k8s-pod-network.bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0" Jan 23 23:57:16.981822 containerd[2024]: 2026-01-23 23:57:16.868 [INFO][5604] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-6s8v5" WorkloadEndpoint="ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9f58d96e-4844-4626-a760-be9823990f64", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"", Pod:"coredns-674b8bbfcf-6s8v5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c3077ac8bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:16.981822 containerd[2024]: 2026-01-23 23:57:16.869 [INFO][5604] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.134/32] ContainerID="bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-6s8v5" WorkloadEndpoint="ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0" Jan 23 23:57:16.981822 containerd[2024]: 2026-01-23 23:57:16.869 [INFO][5604] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5c3077ac8bf ContainerID="bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-6s8v5" WorkloadEndpoint="ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0" Jan 23 23:57:16.981822 containerd[2024]: 2026-01-23 23:57:16.919 [INFO][5604] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-6s8v5" 
WorkloadEndpoint="ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0" Jan 23 23:57:16.981822 containerd[2024]: 2026-01-23 23:57:16.923 [INFO][5604] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-6s8v5" WorkloadEndpoint="ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9f58d96e-4844-4626-a760-be9823990f64", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd", Pod:"coredns-674b8bbfcf-6s8v5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c3077ac8bf", MAC:"1e:ca:1a:d2:2a:44", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:16.981822 containerd[2024]: 2026-01-23 23:57:16.967 [INFO][5604] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd" Namespace="kube-system" Pod="coredns-674b8bbfcf-6s8v5" WorkloadEndpoint="ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0" Jan 23 23:57:17.134743 systemd-networkd[1938]: cali4b2b2744c9c: Link UP Jan 23 23:57:17.135228 systemd-networkd[1938]: cali4b2b2744c9c: Gained carrier Jan 23 23:57:17.140868 containerd[2024]: time="2026-01-23T23:57:17.139972856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:57:17.149216 containerd[2024]: time="2026-01-23T23:57:17.148581020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:57:17.149216 containerd[2024]: time="2026-01-23T23:57:17.148632824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:57:17.149216 containerd[2024]: time="2026-01-23T23:57:17.148799492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:57:17.209011 containerd[2024]: 2026-01-23 23:57:16.869 [WARNING][5663] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"91093aed-82d4-44e1-9d6b-e10aeaeca718", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b", Pod:"coredns-674b8bbfcf-d4bjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3ec842652e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:57:17.209011 containerd[2024]: 2026-01-23 23:57:16.871 [INFO][5663] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6"
Jan 23 23:57:17.209011 containerd[2024]: 2026-01-23 23:57:16.871 [INFO][5663] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" iface="eth0" netns=""
Jan 23 23:57:17.209011 containerd[2024]: 2026-01-23 23:57:16.871 [INFO][5663] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6"
Jan 23 23:57:17.209011 containerd[2024]: 2026-01-23 23:57:16.871 [INFO][5663] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6"
Jan 23 23:57:17.209011 containerd[2024]: 2026-01-23 23:57:17.078 [INFO][5681] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" HandleID="k8s-pod-network.4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0"
Jan 23 23:57:17.209011 containerd[2024]: 2026-01-23 23:57:17.079 [INFO][5681] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:57:17.209011 containerd[2024]: 2026-01-23 23:57:17.080 [INFO][5681] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:57:17.209011 containerd[2024]: 2026-01-23 23:57:17.134 [WARNING][5681] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" HandleID="k8s-pod-network.4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0"
Jan 23 23:57:17.209011 containerd[2024]: 2026-01-23 23:57:17.134 [INFO][5681] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" HandleID="k8s-pod-network.4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0"
Jan 23 23:57:17.209011 containerd[2024]: 2026-01-23 23:57:17.146 [INFO][5681] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:57:17.209011 containerd[2024]: 2026-01-23 23:57:17.176 [INFO][5663] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6"
Jan 23 23:57:17.209011 containerd[2024]: time="2026-01-23T23:57:17.206726264Z" level=info msg="TearDown network for sandbox \"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\" successfully"
Jan 23 23:57:17.209011 containerd[2024]: time="2026-01-23T23:57:17.206764880Z" level=info msg="StopPodSandbox for \"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\" returns successfully"
Jan 23 23:57:17.209011 containerd[2024]: time="2026-01-23T23:57:17.207731000Z" level=info msg="RemovePodSandbox for \"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\""
Jan 23 23:57:17.209011 containerd[2024]: time="2026-01-23T23:57:17.207783800Z" level=info msg="Forcibly stopping sandbox \"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\""
Jan 23 23:57:17.255025 containerd[2024]: 2026-01-23 23:57:16.643 [INFO][5626] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0 calico-apiserver-6976454ff7- calico-apiserver 7d79c384-4d50-4538-9d9a-312b65c47eb8 1058 0 2026-01-23 23:56:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6976454ff7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-204 calico-apiserver-6976454ff7-ddg9z eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4b2b2744c9c [] [] }} ContainerID="efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2" Namespace="calico-apiserver" Pod="calico-apiserver-6976454ff7-ddg9z" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-"
Jan 23 23:57:17.255025 containerd[2024]: 2026-01-23 23:57:16.643 [INFO][5626] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2" Namespace="calico-apiserver" Pod="calico-apiserver-6976454ff7-ddg9z" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0"
Jan 23 23:57:17.255025 containerd[2024]: 2026-01-23 23:57:16.750 [INFO][5649] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2" HandleID="k8s-pod-network.efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0"
Jan 23 23:57:17.255025 containerd[2024]: 2026-01-23 23:57:16.750 [INFO][5649] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2" HandleID="k8s-pod-network.efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c13c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-204", "pod":"calico-apiserver-6976454ff7-ddg9z", "timestamp":"2026-01-23 23:57:16.750184126 +0000 UTC"}, Hostname:"ip-172-31-28-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 23 23:57:17.255025 containerd[2024]: 2026-01-23 23:57:16.750 [INFO][5649] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:57:17.255025 containerd[2024]: 2026-01-23 23:57:16.837 [INFO][5649] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:57:17.255025 containerd[2024]: 2026-01-23 23:57:16.841 [INFO][5649] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-204'
Jan 23 23:57:17.255025 containerd[2024]: 2026-01-23 23:57:16.904 [INFO][5649] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2" host="ip-172-31-28-204"
Jan 23 23:57:17.255025 containerd[2024]: 2026-01-23 23:57:16.937 [INFO][5649] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-204"
Jan 23 23:57:17.255025 containerd[2024]: 2026-01-23 23:57:16.951 [INFO][5649] ipam/ipam.go 511: Trying affinity for 192.168.112.128/26 host="ip-172-31-28-204"
Jan 23 23:57:17.255025 containerd[2024]: 2026-01-23 23:57:16.963 [INFO][5649] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.128/26 host="ip-172-31-28-204"
Jan 23 23:57:17.255025 containerd[2024]: 2026-01-23 23:57:16.972 [INFO][5649] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ip-172-31-28-204"
Jan 23 23:57:17.255025 containerd[2024]: 2026-01-23 23:57:16.973 [INFO][5649] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2" host="ip-172-31-28-204"
Jan 23 23:57:17.255025 containerd[2024]: 2026-01-23 23:57:16.984 [INFO][5649] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2
Jan 23 23:57:17.255025 containerd[2024]: 2026-01-23 23:57:17.008 [INFO][5649] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2" host="ip-172-31-28-204"
Jan 23 23:57:17.255025 containerd[2024]: 2026-01-23 23:57:17.045 [INFO][5649] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.112.135/26] block=192.168.112.128/26 handle="k8s-pod-network.efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2" host="ip-172-31-28-204"
Jan 23 23:57:17.255025 containerd[2024]: 2026-01-23 23:57:17.045 [INFO][5649] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.135/26] handle="k8s-pod-network.efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2" host="ip-172-31-28-204"
Jan 23 23:57:17.255025 containerd[2024]: 2026-01-23 23:57:17.045 [INFO][5649] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:57:17.255025 containerd[2024]: 2026-01-23 23:57:17.045 [INFO][5649] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.112.135/26] IPv6=[] ContainerID="efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2" HandleID="k8s-pod-network.efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0"
Jan 23 23:57:17.258529 containerd[2024]: 2026-01-23 23:57:17.079 [INFO][5626] cni-plugin/k8s.go 418: Populated endpoint ContainerID="efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2" Namespace="calico-apiserver" Pod="calico-apiserver-6976454ff7-ddg9z" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0", GenerateName:"calico-apiserver-6976454ff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"7d79c384-4d50-4538-9d9a-312b65c47eb8", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6976454ff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"", Pod:"calico-apiserver-6976454ff7-ddg9z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b2b2744c9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:57:17.258529 containerd[2024]: 2026-01-23 23:57:17.079 [INFO][5626] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.135/32] ContainerID="efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2" Namespace="calico-apiserver" Pod="calico-apiserver-6976454ff7-ddg9z" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0"
Jan 23 23:57:17.258529 containerd[2024]: 2026-01-23 23:57:17.090 [INFO][5626] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4b2b2744c9c ContainerID="efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2" Namespace="calico-apiserver" Pod="calico-apiserver-6976454ff7-ddg9z" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0"
Jan 23 23:57:17.258529 containerd[2024]: 2026-01-23 23:57:17.137 [INFO][5626] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2" Namespace="calico-apiserver" Pod="calico-apiserver-6976454ff7-ddg9z" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0"
Jan 23 23:57:17.258529 containerd[2024]: 2026-01-23 23:57:17.149 [INFO][5626] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2" Namespace="calico-apiserver" Pod="calico-apiserver-6976454ff7-ddg9z" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0", GenerateName:"calico-apiserver-6976454ff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"7d79c384-4d50-4538-9d9a-312b65c47eb8", ResourceVersion:"1058", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6976454ff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2", Pod:"calico-apiserver-6976454ff7-ddg9z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b2b2744c9c", MAC:"5e:fc:61:f6:a6:b2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:57:17.258529 containerd[2024]: 2026-01-23 23:57:17.233 [INFO][5626] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2" Namespace="calico-apiserver" Pod="calico-apiserver-6976454ff7-ddg9z" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0"
Jan 23 23:57:17.265638 systemd-networkd[1938]: cali29b03597218: Gained IPv6LL
Jan 23 23:57:17.325670 systemd[1]: Started cri-containerd-bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd.scope - libcontainer container bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd.
Jan 23 23:57:17.370508 kubelet[3241]: E0123 23:57:17.369708 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q6gtf" podUID="40024e0b-dc12-464a-9bd9-6f315f803fe4"
Jan 23 23:57:17.482888 containerd[2024]: time="2026-01-23T23:57:17.482454933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:57:17.487933 containerd[2024]: time="2026-01-23T23:57:17.482593965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:57:17.487933 containerd[2024]: time="2026-01-23T23:57:17.487103673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:57:17.488172 containerd[2024]: time="2026-01-23T23:57:17.487820313Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:57:17.554573 sshd[5582]: pam_unix(sshd:session): session closed for user core
Jan 23 23:57:17.570592 systemd[1]: sshd@8-172.31.28.204:22-4.153.228.146:56374.service: Deactivated successfully.
Jan 23 23:57:17.581048 systemd[1]: session-9.scope: Deactivated successfully.
Jan 23 23:57:17.586607 systemd-logind[2010]: Session 9 logged out. Waiting for processes to exit.
Jan 23 23:57:17.590150 systemd-logind[2010]: Removed session 9.
Jan 23 23:57:17.685200 systemd[1]: Started cri-containerd-efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2.scope - libcontainer container efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2.
Jan 23 23:57:17.769158 containerd[2024]: time="2026-01-23T23:57:17.768577031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6s8v5,Uid:9f58d96e-4844-4626-a760-be9823990f64,Namespace:kube-system,Attempt:1,} returns sandbox id \"bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd\""
Jan 23 23:57:17.791796 containerd[2024]: time="2026-01-23T23:57:17.790893023Z" level=info msg="CreateContainer within sandbox \"bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 23 23:57:17.817486 containerd[2024]: 2026-01-23 23:57:17.449 [INFO][5682] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b"
Jan 23 23:57:17.817486 containerd[2024]: 2026-01-23 23:57:17.449 [INFO][5682] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" iface="eth0" netns="/var/run/netns/cni-adb0a156-7329-0f65-67ad-8a4668a63394"
Jan 23 23:57:17.817486 containerd[2024]: 2026-01-23 23:57:17.450 [INFO][5682] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" iface="eth0" netns="/var/run/netns/cni-adb0a156-7329-0f65-67ad-8a4668a63394"
Jan 23 23:57:17.817486 containerd[2024]: 2026-01-23 23:57:17.453 [INFO][5682] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" iface="eth0" netns="/var/run/netns/cni-adb0a156-7329-0f65-67ad-8a4668a63394"
Jan 23 23:57:17.817486 containerd[2024]: 2026-01-23 23:57:17.453 [INFO][5682] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b"
Jan 23 23:57:17.817486 containerd[2024]: 2026-01-23 23:57:17.453 [INFO][5682] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b"
Jan 23 23:57:17.817486 containerd[2024]: 2026-01-23 23:57:17.749 [INFO][5779] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" HandleID="k8s-pod-network.cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0"
Jan 23 23:57:17.817486 containerd[2024]: 2026-01-23 23:57:17.749 [INFO][5779] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:57:17.817486 containerd[2024]: 2026-01-23 23:57:17.750 [INFO][5779] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:57:17.817486 containerd[2024]: 2026-01-23 23:57:17.794 [WARNING][5779] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" HandleID="k8s-pod-network.cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0"
Jan 23 23:57:17.817486 containerd[2024]: 2026-01-23 23:57:17.795 [INFO][5779] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" HandleID="k8s-pod-network.cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0"
Jan 23 23:57:17.817486 containerd[2024]: 2026-01-23 23:57:17.801 [INFO][5779] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:57:17.817486 containerd[2024]: 2026-01-23 23:57:17.806 [INFO][5682] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b"
Jan 23 23:57:17.821787 containerd[2024]: time="2026-01-23T23:57:17.819078251Z" level=info msg="TearDown network for sandbox \"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\" successfully"
Jan 23 23:57:17.821787 containerd[2024]: time="2026-01-23T23:57:17.819151235Z" level=info msg="StopPodSandbox for \"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\" returns successfully"
Jan 23 23:57:17.824490 containerd[2024]: time="2026-01-23T23:57:17.824263763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6976454ff7-t76qf,Uid:74258f81-20b6-4c16-8e17-d994c72b6c19,Namespace:calico-apiserver,Attempt:1,}"
Jan 23 23:57:17.830672 systemd[1]: run-netns-cni\x2dadb0a156\x2d7329\x2d0f65\x2d67ad\x2d8a4668a63394.mount: Deactivated successfully.
Jan 23 23:57:17.879157 containerd[2024]: time="2026-01-23T23:57:17.879080771Z" level=info msg="CreateContainer within sandbox \"bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fc847cf60838c3feeb2f432a6417b644ef6a13220f7aef82166d114068051634\""
Jan 23 23:57:17.882355 containerd[2024]: time="2026-01-23T23:57:17.881579807Z" level=info msg="StartContainer for \"fc847cf60838c3feeb2f432a6417b644ef6a13220f7aef82166d114068051634\""
Jan 23 23:57:17.928372 containerd[2024]: 2026-01-23 23:57:17.604 [WARNING][5745] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"91093aed-82d4-44e1-9d6b-e10aeaeca718", ResourceVersion:"1042", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"d576c05aefdca59653c26c964a76f803e9b1e886609f43dfca2b86e2e0ac918b", Pod:"coredns-674b8bbfcf-d4bjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia3ec842652e", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:57:17.928372 containerd[2024]: 2026-01-23 23:57:17.605 [INFO][5745] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6"
Jan 23 23:57:17.928372 containerd[2024]: 2026-01-23 23:57:17.605 [INFO][5745] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" iface="eth0" netns=""
Jan 23 23:57:17.928372 containerd[2024]: 2026-01-23 23:57:17.605 [INFO][5745] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6"
Jan 23 23:57:17.928372 containerd[2024]: 2026-01-23 23:57:17.605 [INFO][5745] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6"
Jan 23 23:57:17.928372 containerd[2024]: 2026-01-23 23:57:17.868 [INFO][5799] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" HandleID="k8s-pod-network.4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0"
Jan 23 23:57:17.928372 containerd[2024]: 2026-01-23 23:57:17.868 [INFO][5799] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:57:17.928372 containerd[2024]: 2026-01-23 23:57:17.869 [INFO][5799] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:57:17.928372 containerd[2024]: 2026-01-23 23:57:17.901 [WARNING][5799] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" HandleID="k8s-pod-network.4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0"
Jan 23 23:57:17.928372 containerd[2024]: 2026-01-23 23:57:17.901 [INFO][5799] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" HandleID="k8s-pod-network.4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--d4bjn-eth0"
Jan 23 23:57:17.928372 containerd[2024]: 2026-01-23 23:57:17.908 [INFO][5799] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:57:17.928372 containerd[2024]: 2026-01-23 23:57:17.917 [INFO][5745] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6"
Jan 23 23:57:17.928372 containerd[2024]: time="2026-01-23T23:57:17.927970331Z" level=info msg="TearDown network for sandbox \"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\" successfully"
Jan 23 23:57:17.947373 containerd[2024]: time="2026-01-23T23:57:17.946659216Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 23 23:57:17.947731 containerd[2024]: time="2026-01-23T23:57:17.947295888Z" level=info msg="RemovePodSandbox \"4411e6d89290bc233b7ebda315e0a368d75a1c53855b8dec0542a6dafc29cec6\" returns successfully"
Jan 23 23:57:17.949330 containerd[2024]: time="2026-01-23T23:57:17.949235496Z" level=info msg="StopPodSandbox for \"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\""
Jan 23 23:57:18.005232 systemd[1]: Started cri-containerd-fc847cf60838c3feeb2f432a6417b644ef6a13220f7aef82166d114068051634.scope - libcontainer container fc847cf60838c3feeb2f432a6417b644ef6a13220f7aef82166d114068051634.
Jan 23 23:57:18.073347 containerd[2024]: time="2026-01-23T23:57:18.072879968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6976454ff7-ddg9z,Uid:7d79c384-4d50-4538-9d9a-312b65c47eb8,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2\""
Jan 23 23:57:18.080047 containerd[2024]: time="2026-01-23T23:57:18.079990292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 23:57:18.165574 containerd[2024]: time="2026-01-23T23:57:18.165072693Z" level=info msg="StartContainer for \"fc847cf60838c3feeb2f432a6417b644ef6a13220f7aef82166d114068051634\" returns successfully"
Jan 23 23:57:18.234344 containerd[2024]: 2026-01-23 23:57:18.125 [WARNING][5860] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"46c86ab0-1223-4a22-bfcf-7f463abcf340", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b", Pod:"csi-node-driver-rn45p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.112.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2781ce6a36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:57:18.234344 containerd[2024]: 2026-01-23 23:57:18.125 [INFO][5860] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487"
Jan 23 23:57:18.234344 containerd[2024]: 2026-01-23 23:57:18.125 [INFO][5860] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" iface="eth0" netns=""
Jan 23 23:57:18.234344 containerd[2024]: 2026-01-23 23:57:18.125 [INFO][5860] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487"
Jan 23 23:57:18.234344 containerd[2024]: 2026-01-23 23:57:18.125 [INFO][5860] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487"
Jan 23 23:57:18.234344 containerd[2024]: 2026-01-23 23:57:18.202 [INFO][5889] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" HandleID="k8s-pod-network.eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" Workload="ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0"
Jan 23 23:57:18.234344 containerd[2024]: 2026-01-23 23:57:18.203 [INFO][5889] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:57:18.234344 containerd[2024]: 2026-01-23 23:57:18.203 [INFO][5889] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:57:18.234344 containerd[2024]: 2026-01-23 23:57:18.218 [WARNING][5889] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" HandleID="k8s-pod-network.eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" Workload="ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0"
Jan 23 23:57:18.234344 containerd[2024]: 2026-01-23 23:57:18.218 [INFO][5889] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" HandleID="k8s-pod-network.eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" Workload="ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0"
Jan 23 23:57:18.234344 containerd[2024]: 2026-01-23 23:57:18.221 [INFO][5889] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:57:18.234344 containerd[2024]: 2026-01-23 23:57:18.228 [INFO][5860] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487"
Jan 23 23:57:18.235207 containerd[2024]: time="2026-01-23T23:57:18.234638025Z" level=info msg="TearDown network for sandbox \"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\" successfully"
Jan 23 23:57:18.235207 containerd[2024]: time="2026-01-23T23:57:18.234798621Z" level=info msg="StopPodSandbox for \"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\" returns successfully"
Jan 23 23:57:18.236705 containerd[2024]: time="2026-01-23T23:57:18.236647881Z" level=info msg="RemovePodSandbox for \"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\""
Jan 23 23:57:18.236818 containerd[2024]: time="2026-01-23T23:57:18.236706897Z" level=info msg="Forcibly stopping sandbox \"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\""
Jan 23 23:57:18.352390 containerd[2024]: time="2026-01-23T23:57:18.351804730Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 23 23:57:18.359282 containerd[2024]: time="2026-01-23T23:57:18.358019542Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 23:57:18.360825 containerd[2024]: time="2026-01-23T23:57:18.359804686Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 23:57:18.361025 kubelet[3241]: E0123 23:57:18.360060 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 23:57:18.361025 kubelet[3241]: E0123 23:57:18.360147 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 23:57:18.362706 kubelet[3241]: E0123 23:57:18.360564 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2fbcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6976454ff7-ddg9z_calico-apiserver(7d79c384-4d50-4538-9d9a-312b65c47eb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:57:18.363938 kubelet[3241]: E0123 23:57:18.363847 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6976454ff7-ddg9z" podUID="7d79c384-4d50-4538-9d9a-312b65c47eb8"
Jan 23 23:57:18.383773 kubelet[3241]: E0123 23:57:18.383545 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6976454ff7-ddg9z" podUID="7d79c384-4d50-4538-9d9a-312b65c47eb8"
Jan 23 23:57:18.432113 systemd-networkd[1938]: cali77444c78010: Link UP
Jan 23 23:57:18.437539 systemd-networkd[1938]: cali77444c78010: Gained carrier
Jan 23 23:57:18.527771 kubelet[3241]: I0123 23:57:18.527666 3241 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6s8v5" podStartSLOduration=58.52764073 podStartE2EDuration="58.52764073s" podCreationTimestamp="2026-01-23 23:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:57:18.521696602 +0000 UTC m=+62.984211805" watchObservedRunningTime="2026-01-23 23:57:18.52764073 +0000 UTC m=+62.990155933"
Jan 23 23:57:18.532361 containerd[2024]: 2026-01-23 23:57:18.124 [INFO][5833] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0 calico-apiserver-6976454ff7- calico-apiserver 74258f81-20b6-4c16-8e17-d994c72b6c19 1088 0 2026-01-23 23:56:33 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6976454ff7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-28-204 calico-apiserver-6976454ff7-t76qf eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali77444c78010 [] [] }} ContainerID="bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9" Namespace="calico-apiserver" Pod="calico-apiserver-6976454ff7-t76qf" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-"
Jan 23 23:57:18.532361 containerd[2024]: 2026-01-23 23:57:18.125 [INFO][5833] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9" Namespace="calico-apiserver" Pod="calico-apiserver-6976454ff7-t76qf" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0"
Jan 23 23:57:18.532361 containerd[2024]: 2026-01-23 23:57:18.202 [INFO][5894] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9" HandleID="k8s-pod-network.bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0"
Jan 23 23:57:18.532361 containerd[2024]: 2026-01-23 23:57:18.203 [INFO][5894] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9" HandleID="k8s-pod-network.bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-28-204", "pod":"calico-apiserver-6976454ff7-t76qf", "timestamp":"2026-01-23 23:57:18.202865373 +0000 UTC"}, Hostname:"ip-172-31-28-204", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 23 23:57:18.532361 containerd[2024]: 2026-01-23 23:57:18.203 [INFO][5894] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:57:18.532361 containerd[2024]: 2026-01-23 23:57:18.221 [INFO][5894] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:57:18.532361 containerd[2024]: 2026-01-23 23:57:18.222 [INFO][5894] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-28-204'
Jan 23 23:57:18.532361 containerd[2024]: 2026-01-23 23:57:18.251 [INFO][5894] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9" host="ip-172-31-28-204"
Jan 23 23:57:18.532361 containerd[2024]: 2026-01-23 23:57:18.268 [INFO][5894] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-28-204"
Jan 23 23:57:18.532361 containerd[2024]: 2026-01-23 23:57:18.291 [INFO][5894] ipam/ipam.go 511: Trying affinity for 192.168.112.128/26 host="ip-172-31-28-204"
Jan 23 23:57:18.532361 containerd[2024]: 2026-01-23 23:57:18.306 [INFO][5894] ipam/ipam.go 158: Attempting to load block cidr=192.168.112.128/26 host="ip-172-31-28-204"
Jan 23 23:57:18.532361 containerd[2024]: 2026-01-23 23:57:18.317 [INFO][5894] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ip-172-31-28-204"
Jan 23 23:57:18.532361 containerd[2024]: 2026-01-23 23:57:18.318 [INFO][5894] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9" host="ip-172-31-28-204"
Jan 23 23:57:18.532361 containerd[2024]: 2026-01-23 23:57:18.322 [INFO][5894] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9
Jan 23 23:57:18.532361 containerd[2024]: 2026-01-23 23:57:18.342 [INFO][5894] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9" host="ip-172-31-28-204"
Jan 23 23:57:18.532361 containerd[2024]: 2026-01-23 23:57:18.388 [INFO][5894] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.112.136/26] block=192.168.112.128/26 handle="k8s-pod-network.bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9" host="ip-172-31-28-204"
Jan 23 23:57:18.532361 containerd[2024]: 2026-01-23 23:57:18.388 [INFO][5894] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.112.136/26] handle="k8s-pod-network.bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9" host="ip-172-31-28-204"
Jan 23 23:57:18.532361 containerd[2024]: 2026-01-23 23:57:18.388 [INFO][5894] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:57:18.532361 containerd[2024]: 2026-01-23 23:57:18.388 [INFO][5894] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.112.136/26] IPv6=[] ContainerID="bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9" HandleID="k8s-pod-network.bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0"
Jan 23 23:57:18.534798 containerd[2024]: 2026-01-23 23:57:18.403 [INFO][5833] cni-plugin/k8s.go 418: Populated endpoint ContainerID="bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9" Namespace="calico-apiserver" Pod="calico-apiserver-6976454ff7-t76qf" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0", GenerateName:"calico-apiserver-6976454ff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"74258f81-20b6-4c16-8e17-d994c72b6c19", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6976454ff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"", Pod:"calico-apiserver-6976454ff7-t76qf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77444c78010", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:57:18.534798 containerd[2024]: 2026-01-23 23:57:18.404 [INFO][5833] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.112.136/32] ContainerID="bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9" Namespace="calico-apiserver" Pod="calico-apiserver-6976454ff7-t76qf" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0"
Jan 23 23:57:18.534798 containerd[2024]: 2026-01-23 23:57:18.405 [INFO][5833] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali77444c78010 ContainerID="bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9" Namespace="calico-apiserver" Pod="calico-apiserver-6976454ff7-t76qf" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0"
Jan 23 23:57:18.534798 containerd[2024]: 2026-01-23 23:57:18.442 [INFO][5833] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9" Namespace="calico-apiserver" Pod="calico-apiserver-6976454ff7-t76qf" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0"
Jan 23 23:57:18.534798 containerd[2024]: 2026-01-23 23:57:18.454 [INFO][5833] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9" Namespace="calico-apiserver" Pod="calico-apiserver-6976454ff7-t76qf" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0", GenerateName:"calico-apiserver-6976454ff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"74258f81-20b6-4c16-8e17-d994c72b6c19", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 33, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6976454ff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9", Pod:"calico-apiserver-6976454ff7-t76qf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77444c78010", MAC:"ba:b5:1f:ee:59:ad", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:57:18.534798 containerd[2024]: 2026-01-23 23:57:18.515 [INFO][5833] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9" Namespace="calico-apiserver" Pod="calico-apiserver-6976454ff7-t76qf" WorkloadEndpoint="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0"
Jan 23 23:57:18.592880 containerd[2024]: 2026-01-23 23:57:18.339 [WARNING][5918] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"46c86ab0-1223-4a22-bfcf-7f463abcf340", ResourceVersion:"1075", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 49, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"60d6dbba3d48f97413869e3bd2d8d20b0e35460e335f830ead7dca9596695e0b", Pod:"csi-node-driver-rn45p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.112.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calie2781ce6a36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:57:18.592880 containerd[2024]: 2026-01-23 23:57:18.340 [INFO][5918] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487"
Jan 23 23:57:18.592880 containerd[2024]: 2026-01-23 23:57:18.340 [INFO][5918] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" iface="eth0" netns=""
Jan 23 23:57:18.592880 containerd[2024]: 2026-01-23 23:57:18.340 [INFO][5918] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487"
Jan 23 23:57:18.592880 containerd[2024]: 2026-01-23 23:57:18.340 [INFO][5918] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487"
Jan 23 23:57:18.592880 containerd[2024]: 2026-01-23 23:57:18.477 [INFO][5931] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" HandleID="k8s-pod-network.eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" Workload="ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0"
Jan 23 23:57:18.592880 containerd[2024]: 2026-01-23 23:57:18.477 [INFO][5931] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:57:18.592880 containerd[2024]: 2026-01-23 23:57:18.478 [INFO][5931] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:57:18.592880 containerd[2024]: 2026-01-23 23:57:18.556 [WARNING][5931] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" HandleID="k8s-pod-network.eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" Workload="ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0"
Jan 23 23:57:18.592880 containerd[2024]: 2026-01-23 23:57:18.557 [INFO][5931] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" HandleID="k8s-pod-network.eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487" Workload="ip--172--31--28--204-k8s-csi--node--driver--rn45p-eth0"
Jan 23 23:57:18.592880 containerd[2024]: 2026-01-23 23:57:18.561 [INFO][5931] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:57:18.592880 containerd[2024]: 2026-01-23 23:57:18.576 [INFO][5918] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487"
Jan 23 23:57:18.593871 containerd[2024]: time="2026-01-23T23:57:18.592895927Z" level=info msg="TearDown network for sandbox \"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\" successfully"
Jan 23 23:57:18.612236 containerd[2024]: time="2026-01-23T23:57:18.611902079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:57:18.612623 containerd[2024]: time="2026-01-23T23:57:18.612128027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:57:18.615700 containerd[2024]: time="2026-01-23T23:57:18.612203315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:57:18.615700 containerd[2024]: time="2026-01-23T23:57:18.613670735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:57:18.620131 containerd[2024]: time="2026-01-23T23:57:18.617032031Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 23 23:57:18.620131 containerd[2024]: time="2026-01-23T23:57:18.617180219Z" level=info msg="RemovePodSandbox \"eadd2746d3704b2d9e8a0860184bcec52d0e1fcf20f1c0682291dafe4e13c487\" returns successfully"
Jan 23 23:57:18.621290 containerd[2024]: time="2026-01-23T23:57:18.621237239Z" level=info msg="StopPodSandbox for \"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\""
Jan 23 23:57:18.721809 systemd[1]: Started cri-containerd-bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9.scope - libcontainer container bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9.
Jan 23 23:57:18.736870 systemd-networkd[1938]: cali5c3077ac8bf: Gained IPv6LL
Jan 23 23:57:18.864034 systemd-networkd[1938]: cali4b2b2744c9c: Gained IPv6LL
Jan 23 23:57:18.892810 containerd[2024]: time="2026-01-23T23:57:18.892661388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6976454ff7-t76qf,Uid:74258f81-20b6-4c16-8e17-d994c72b6c19,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9\""
Jan 23 23:57:18.899241 containerd[2024]: time="2026-01-23T23:57:18.897147984Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 23:57:18.960923 containerd[2024]: 2026-01-23 23:57:18.828 [WARNING][5982] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" WorkloadEndpoint="ip--172--31--28--204-k8s-whisker--84494b5d4d--lj25m-eth0"
Jan 23 23:57:18.960923 containerd[2024]: 2026-01-23 23:57:18.829 [INFO][5982] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1"
Jan 23 23:57:18.960923 containerd[2024]: 2026-01-23 23:57:18.829 [INFO][5982] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" iface="eth0" netns=""
Jan 23 23:57:18.960923 containerd[2024]: 2026-01-23 23:57:18.829 [INFO][5982] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1"
Jan 23 23:57:18.960923 containerd[2024]: 2026-01-23 23:57:18.829 [INFO][5982] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1"
Jan 23 23:57:18.960923 containerd[2024]: 2026-01-23 23:57:18.922 [INFO][6006] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" HandleID="k8s-pod-network.931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" Workload="ip--172--31--28--204-k8s-whisker--84494b5d4d--lj25m-eth0"
Jan 23 23:57:18.960923 containerd[2024]: 2026-01-23 23:57:18.922 [INFO][6006] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:57:18.960923 containerd[2024]: 2026-01-23 23:57:18.923 [INFO][6006] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:57:18.960923 containerd[2024]: 2026-01-23 23:57:18.945 [WARNING][6006] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" HandleID="k8s-pod-network.931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" Workload="ip--172--31--28--204-k8s-whisker--84494b5d4d--lj25m-eth0"
Jan 23 23:57:18.960923 containerd[2024]: 2026-01-23 23:57:18.945 [INFO][6006] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" HandleID="k8s-pod-network.931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" Workload="ip--172--31--28--204-k8s-whisker--84494b5d4d--lj25m-eth0"
Jan 23 23:57:18.960923 containerd[2024]: 2026-01-23 23:57:18.950 [INFO][6006] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:57:18.960923 containerd[2024]: 2026-01-23 23:57:18.954 [INFO][5982] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" Jan 23 23:57:18.962742 containerd[2024]: time="2026-01-23T23:57:18.961808605Z" level=info msg="TearDown network for sandbox \"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\" successfully" Jan 23 23:57:18.962742 containerd[2024]: time="2026-01-23T23:57:18.962482993Z" level=info msg="StopPodSandbox for \"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\" returns successfully" Jan 23 23:57:18.964400 containerd[2024]: time="2026-01-23T23:57:18.963892117Z" level=info msg="RemovePodSandbox for \"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\"" Jan 23 23:57:18.964400 containerd[2024]: time="2026-01-23T23:57:18.963947449Z" level=info msg="Forcibly stopping sandbox \"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\"" Jan 23 23:57:19.183297 containerd[2024]: 2026-01-23 23:57:19.044 [WARNING][6026] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" WorkloadEndpoint="ip--172--31--28--204-k8s-whisker--84494b5d4d--lj25m-eth0" Jan 23 23:57:19.183297 containerd[2024]: 2026-01-23 23:57:19.044 [INFO][6026] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" Jan 23 23:57:19.183297 containerd[2024]: 2026-01-23 23:57:19.044 [INFO][6026] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" iface="eth0" netns="" Jan 23 23:57:19.183297 containerd[2024]: 2026-01-23 23:57:19.045 [INFO][6026] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" Jan 23 23:57:19.183297 containerd[2024]: 2026-01-23 23:57:19.045 [INFO][6026] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" Jan 23 23:57:19.183297 containerd[2024]: 2026-01-23 23:57:19.120 [INFO][6033] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" HandleID="k8s-pod-network.931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" Workload="ip--172--31--28--204-k8s-whisker--84494b5d4d--lj25m-eth0" Jan 23 23:57:19.183297 containerd[2024]: 2026-01-23 23:57:19.120 [INFO][6033] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:19.183297 containerd[2024]: 2026-01-23 23:57:19.120 [INFO][6033] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:19.183297 containerd[2024]: 2026-01-23 23:57:19.167 [WARNING][6033] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" HandleID="k8s-pod-network.931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" Workload="ip--172--31--28--204-k8s-whisker--84494b5d4d--lj25m-eth0" Jan 23 23:57:19.183297 containerd[2024]: 2026-01-23 23:57:19.167 [INFO][6033] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" HandleID="k8s-pod-network.931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" Workload="ip--172--31--28--204-k8s-whisker--84494b5d4d--lj25m-eth0" Jan 23 23:57:19.183297 containerd[2024]: 2026-01-23 23:57:19.173 [INFO][6033] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:19.183297 containerd[2024]: 2026-01-23 23:57:19.177 [INFO][6026] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1" Jan 23 23:57:19.185397 containerd[2024]: time="2026-01-23T23:57:19.184196878Z" level=info msg="TearDown network for sandbox \"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\" successfully" Jan 23 23:57:19.193879 containerd[2024]: time="2026-01-23T23:57:19.193816510Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:57:19.194183 containerd[2024]: time="2026-01-23T23:57:19.194148610Z" level=info msg="RemovePodSandbox \"931c062470e0b934fd79b16adb5d4af52ece2a3ed0ac91352df4048cd56b75f1\" returns successfully" Jan 23 23:57:19.196776 containerd[2024]: time="2026-01-23T23:57:19.196714630Z" level=info msg="StopPodSandbox for \"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\"" Jan 23 23:57:19.197907 containerd[2024]: time="2026-01-23T23:57:19.197841634Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:19.200691 containerd[2024]: time="2026-01-23T23:57:19.200519278Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:57:19.201197 containerd[2024]: time="2026-01-23T23:57:19.200628154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:57:19.204952 kubelet[3241]: E0123 23:57:19.202413 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:19.204952 kubelet[3241]: E0123 23:57:19.202498 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:19.204952 kubelet[3241]: E0123 23:57:19.204569 3241 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6td6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6976454ff7-t76qf_calico-apiserver(74258f81-20b6-4c16-8e17-d994c72b6c19): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:19.206353 kubelet[3241]: E0123 23:57:19.205992 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6976454ff7-t76qf" podUID="74258f81-20b6-4c16-8e17-d994c72b6c19" Jan 23 23:57:19.392797 containerd[2024]: 2026-01-23 23:57:19.295 [WARNING][6050] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"40024e0b-dc12-464a-9bd9-6f315f803fe4", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126", Pod:"goldmane-666569f655-q6gtf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.112.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali29b03597218", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:19.392797 containerd[2024]: 2026-01-23 23:57:19.296 [INFO][6050] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" Jan 23 23:57:19.392797 containerd[2024]: 2026-01-23 23:57:19.296 [INFO][6050] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" iface="eth0" netns="" Jan 23 23:57:19.392797 containerd[2024]: 2026-01-23 23:57:19.296 [INFO][6050] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" Jan 23 23:57:19.392797 containerd[2024]: 2026-01-23 23:57:19.296 [INFO][6050] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" Jan 23 23:57:19.392797 containerd[2024]: 2026-01-23 23:57:19.360 [INFO][6058] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" HandleID="k8s-pod-network.54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" Workload="ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0" Jan 23 23:57:19.392797 containerd[2024]: 2026-01-23 23:57:19.360 [INFO][6058] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:19.392797 containerd[2024]: 2026-01-23 23:57:19.360 [INFO][6058] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:19.392797 containerd[2024]: 2026-01-23 23:57:19.382 [WARNING][6058] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" HandleID="k8s-pod-network.54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" Workload="ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0" Jan 23 23:57:19.392797 containerd[2024]: 2026-01-23 23:57:19.382 [INFO][6058] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" HandleID="k8s-pod-network.54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" Workload="ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0" Jan 23 23:57:19.392797 containerd[2024]: 2026-01-23 23:57:19.385 [INFO][6058] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:19.392797 containerd[2024]: 2026-01-23 23:57:19.388 [INFO][6050] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" Jan 23 23:57:19.393686 containerd[2024]: time="2026-01-23T23:57:19.392759387Z" level=info msg="TearDown network for sandbox \"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\" successfully" Jan 23 23:57:19.393686 containerd[2024]: time="2026-01-23T23:57:19.393472403Z" level=info msg="StopPodSandbox for \"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\" returns successfully" Jan 23 23:57:19.394797 containerd[2024]: time="2026-01-23T23:57:19.394739207Z" level=info msg="RemovePodSandbox for \"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\"" Jan 23 23:57:19.394949 containerd[2024]: time="2026-01-23T23:57:19.394826147Z" level=info msg="Forcibly stopping sandbox \"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\"" Jan 23 23:57:19.426587 kubelet[3241]: E0123 23:57:19.425989 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6976454ff7-ddg9z" podUID="7d79c384-4d50-4538-9d9a-312b65c47eb8" Jan 23 23:57:19.429531 kubelet[3241]: E0123 23:57:19.428124 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6976454ff7-t76qf" podUID="74258f81-20b6-4c16-8e17-d994c72b6c19" Jan 23 23:57:19.684293 containerd[2024]: 2026-01-23 23:57:19.587 [WARNING][6073] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"40024e0b-dc12-464a-9bd9-6f315f803fe4", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"727b6026c2e877744cb87e14a23600e04349b3ad367cb58228b7b78082961126", Pod:"goldmane-666569f655-q6gtf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.112.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali29b03597218", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:19.684293 containerd[2024]: 2026-01-23 23:57:19.587 [INFO][6073] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" Jan 23 23:57:19.684293 containerd[2024]: 2026-01-23 23:57:19.587 [INFO][6073] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" iface="eth0" netns="" Jan 23 23:57:19.684293 containerd[2024]: 2026-01-23 23:57:19.587 [INFO][6073] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" Jan 23 23:57:19.684293 containerd[2024]: 2026-01-23 23:57:19.587 [INFO][6073] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" Jan 23 23:57:19.684293 containerd[2024]: 2026-01-23 23:57:19.657 [INFO][6082] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" HandleID="k8s-pod-network.54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" Workload="ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0" Jan 23 23:57:19.684293 containerd[2024]: 2026-01-23 23:57:19.657 [INFO][6082] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:19.684293 containerd[2024]: 2026-01-23 23:57:19.657 [INFO][6082] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:19.684293 containerd[2024]: 2026-01-23 23:57:19.674 [WARNING][6082] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" HandleID="k8s-pod-network.54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" Workload="ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0" Jan 23 23:57:19.684293 containerd[2024]: 2026-01-23 23:57:19.674 [INFO][6082] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" HandleID="k8s-pod-network.54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" Workload="ip--172--31--28--204-k8s-goldmane--666569f655--q6gtf-eth0" Jan 23 23:57:19.684293 containerd[2024]: 2026-01-23 23:57:19.677 [INFO][6082] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:19.684293 containerd[2024]: 2026-01-23 23:57:19.680 [INFO][6073] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c" Jan 23 23:57:19.685510 containerd[2024]: time="2026-01-23T23:57:19.685448616Z" level=info msg="TearDown network for sandbox \"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\" successfully" Jan 23 23:57:19.691855 containerd[2024]: time="2026-01-23T23:57:19.691751940Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:57:19.691855 containerd[2024]: time="2026-01-23T23:57:19.691852380Z" level=info msg="RemovePodSandbox \"54819a1f9a1e83dcdf3538b3af73371d3d43ee22bc3f4a1bba1651e5836a940c\" returns successfully" Jan 23 23:57:20.079669 systemd-networkd[1938]: cali77444c78010: Gained IPv6LL Jan 23 23:57:20.442344 kubelet[3241]: E0123 23:57:20.441857 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6976454ff7-t76qf" podUID="74258f81-20b6-4c16-8e17-d994c72b6c19" Jan 23 23:57:22.655905 systemd[1]: Started sshd@9-172.31.28.204:22-4.153.228.146:56386.service - OpenSSH per-connection server daemon (4.153.228.146:56386). 
Jan 23 23:57:22.694711 ntpd[2005]: Listen normally on 7 vxlan.calico 192.168.112.128:123 Jan 23 23:57:22.694835 ntpd[2005]: Listen normally on 8 cali0668c356275 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 23 23:57:22.694918 ntpd[2005]: Listen normally on 9 vxlan.calico [fe80::6423:4fff:fefb:92a0%5]:123 Jan 23 23:57:22.694986 ntpd[2005]: Listen normally on 10 calia0a10b5544a [fe80::ecee:eeff:feee:eeee%8]:123 Jan 23 23:57:22.695053 ntpd[2005]: Listen normally on 11 calie2781ce6a36 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 23 23:57:22.695118 ntpd[2005]: Listen normally on 12 calia3ec842652e [fe80::ecee:eeff:feee:eeee%10]:123 Jan 23 23:57:22.695184 ntpd[2005]: Listen normally on 13 cali29b03597218 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 23 23:57:22.695251 ntpd[2005]: Listen normally on 14 cali5c3077ac8bf [fe80::ecee:eeff:feee:eeee%12]:123 Jan 23 23:57:22.695357 ntpd[2005]: Listen normally on 15 cali4b2b2744c9c [fe80::ecee:eeff:feee:eeee%13]:123 Jan 23 23:57:22.695434 ntpd[2005]: Listen normally on 16 cali77444c78010 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 23 23:57:23.204772 sshd[6095]: Accepted publickey for core from 4.153.228.146 port 56386 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:23.240222 sshd[6095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:23.250574 systemd-logind[2010]: New session 10 of user core. Jan 23 23:57:23.256684 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 23:57:23.725772 sshd[6095]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:23.733818 systemd[1]: sshd@9-172.31.28.204:22-4.153.228.146:56386.service: Deactivated successfully. Jan 23 23:57:23.740964 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 23:57:23.743941 systemd-logind[2010]: Session 10 logged out. Waiting for processes to exit. Jan 23 23:57:23.746962 systemd-logind[2010]: Removed session 10. Jan 23 23:57:23.815957 systemd[1]: Started sshd@10-172.31.28.204:22-4.153.228.146:56392.service - OpenSSH per-connection server daemon (4.153.228.146:56392).
Jan 23 23:57:24.333427 sshd[6114]: Accepted publickey for core from 4.153.228.146 port 56392 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:24.336419 sshd[6114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:24.345445 systemd-logind[2010]: New session 11 of user core. Jan 23 23:57:24.351811 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 23:57:24.830931 containerd[2024]: time="2026-01-23T23:57:24.830855418Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:57:24.976879 sshd[6114]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:24.983459 systemd-logind[2010]: Session 11 logged out. Waiting for processes to exit. Jan 23 23:57:24.984572 systemd[1]: sshd@10-172.31.28.204:22-4.153.228.146:56392.service: Deactivated successfully. Jan 23 23:57:24.990266 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 23:57:24.995407 systemd-logind[2010]: Removed session 11. Jan 23 23:57:25.066829 systemd[1]: Started sshd@11-172.31.28.204:22-4.153.228.146:50150.service - OpenSSH per-connection server daemon (4.153.228.146:50150). Jan 23 23:57:25.095253 containerd[2024]: time="2026-01-23T23:57:25.095026047Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:25.097887 containerd[2024]: time="2026-01-23T23:57:25.097575519Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:57:25.097887 containerd[2024]: time="2026-01-23T23:57:25.097727991Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:57:25.098606 kubelet[3241]: E0123 23:57:25.098053 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:57:25.098606 kubelet[3241]: E0123 23:57:25.098116 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:57:25.100809 kubelet[3241]: E0123 23:57:25.098289 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:671baf049109417185f3e6729fa67078,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z7jq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d6bcfdb8b-dvgwk_calico-system(e0693b48-91c3-4d6b-a757-c65fc3ee493a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:25.106389 containerd[2024]: time="2026-01-23T23:57:25.106029807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:57:25.393180 containerd[2024]: time="2026-01-23T23:57:25.392864921Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:25.393180 containerd[2024]: time="2026-01-23T23:57:25.395353049Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:57:25.393180 containerd[2024]: time="2026-01-23T23:57:25.395492153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:57:25.399802 kubelet[3241]: E0123 23:57:25.395711 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:57:25.399802 kubelet[3241]: E0123 23:57:25.395779 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:57:25.399802 kubelet[3241]: E0123 23:57:25.395956 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7jq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d6bcfdb8b-dvgwk_calico-system(e0693b48-91c3-4d6b-a757-c65fc3ee493a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:25.399802 kubelet[3241]: E0123 23:57:25.397390 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d6bcfdb8b-dvgwk" podUID="e0693b48-91c3-4d6b-a757-c65fc3ee493a" Jan 23 23:57:25.577801 sshd[6129]: Accepted publickey for core from 4.153.228.146 port 50150 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:25.580877 sshd[6129]: pam_unix(sshd:session): session opened for user core(uid=500) by 
core(uid=0) Jan 23 23:57:25.589519 systemd-logind[2010]: New session 12 of user core. Jan 23 23:57:25.599656 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 23:57:26.070241 sshd[6129]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:26.078049 systemd[1]: sshd@11-172.31.28.204:22-4.153.228.146:50150.service: Deactivated successfully. Jan 23 23:57:26.082741 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 23:57:26.085230 systemd-logind[2010]: Session 12 logged out. Waiting for processes to exit. Jan 23 23:57:26.087786 systemd-logind[2010]: Removed session 12. Jan 23 23:57:26.828062 containerd[2024]: time="2026-01-23T23:57:26.827928620Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:57:27.089591 containerd[2024]: time="2026-01-23T23:57:27.089414609Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:27.091696 containerd[2024]: time="2026-01-23T23:57:27.091590773Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:57:27.092047 containerd[2024]: time="2026-01-23T23:57:27.091639793Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:57:27.092146 kubelet[3241]: E0123 23:57:27.091971 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:57:27.092146 kubelet[3241]: E0123 23:57:27.092037 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:57:27.092797 kubelet[3241]: E0123 23:57:27.092232 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxpxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74cfd6877d-hr9jw_calico-system(64d067e9-db06-43a4-8ec2-5418bd9de44b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:27.094258 kubelet[3241]: E0123 23:57:27.094180 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74cfd6877d-hr9jw" podUID="64d067e9-db06-43a4-8ec2-5418bd9de44b" Jan 23 23:57:29.832276 containerd[2024]: time="2026-01-23T23:57:29.831810863Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:57:30.135128 containerd[2024]: time="2026-01-23T23:57:30.134899700Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:30.137566 containerd[2024]: time="2026-01-23T23:57:30.137393480Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:57:30.137566 containerd[2024]: time="2026-01-23T23:57:30.137474480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:57:30.138201 kubelet[3241]: E0123 23:57:30.137777 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:57:30.138201 kubelet[3241]: E0123 23:57:30.137851 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:57:30.138201 kubelet[3241]: E0123 23:57:30.138148 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pvhxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
csi-node-driver-rn45p_calico-system(46c86ab0-1223-4a22-bfcf-7f463abcf340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:30.140876 containerd[2024]: time="2026-01-23T23:57:30.139824320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:57:30.392349 containerd[2024]: time="2026-01-23T23:57:30.392139669Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:30.394795 containerd[2024]: time="2026-01-23T23:57:30.394675797Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:57:30.394987 containerd[2024]: time="2026-01-23T23:57:30.394748553Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:57:30.395692 kubelet[3241]: E0123 23:57:30.395227 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:30.395692 kubelet[3241]: E0123 23:57:30.395304 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:30.396825 kubelet[3241]: E0123 23:57:30.396405 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2fbcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6976454ff7-ddg9z_calico-apiserver(7d79c384-4d50-4538-9d9a-312b65c47eb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:30.397608 containerd[2024]: time="2026-01-23T23:57:30.397535109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:57:30.398070 kubelet[3241]: E0123 23:57:30.397960 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6976454ff7-ddg9z" podUID="7d79c384-4d50-4538-9d9a-312b65c47eb8" Jan 23 23:57:30.696757 containerd[2024]: time="2026-01-23T23:57:30.696542675Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:30.699863 containerd[2024]: time="2026-01-23T23:57:30.699711527Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:57:30.700394 kubelet[3241]: E0123 23:57:30.700298 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:57:30.700561 kubelet[3241]: E0123 23:57:30.700402 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:57:30.701127 kubelet[3241]: E0123 23:57:30.700598 3241 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pvhxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rn45p_calico-system(46c86ab0-1223-4a22-bfcf-7f463abcf340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:30.703303 containerd[2024]: time="2026-01-23T23:57:30.699805667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:57:30.703963 kubelet[3241]: E0123 23:57:30.703868 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rn45p" podUID="46c86ab0-1223-4a22-bfcf-7f463abcf340" Jan 23 23:57:31.181878 systemd[1]: Started sshd@12-172.31.28.204:22-4.153.228.146:50156.service - OpenSSH per-connection server daemon 
(4.153.228.146:50156). Jan 23 23:57:31.723162 sshd[6152]: Accepted publickey for core from 4.153.228.146 port 50156 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:31.726368 sshd[6152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:31.735658 systemd-logind[2010]: New session 13 of user core. Jan 23 23:57:31.742642 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 23:57:31.828603 containerd[2024]: time="2026-01-23T23:57:31.828516612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:57:32.131342 containerd[2024]: time="2026-01-23T23:57:32.131071174Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:32.133698 containerd[2024]: time="2026-01-23T23:57:32.133475182Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:57:32.133698 containerd[2024]: time="2026-01-23T23:57:32.133543738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:57:32.133973 kubelet[3241]: E0123 23:57:32.133852 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:32.133973 kubelet[3241]: E0123 23:57:32.133919 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:32.134687 kubelet[3241]: E0123 23:57:32.134135 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6td6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6976454ff7-t76qf_calico-apiserver(74258f81-20b6-4c16-8e17-d994c72b6c19): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:32.136202 kubelet[3241]: E0123 23:57:32.136118 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6976454ff7-t76qf" podUID="74258f81-20b6-4c16-8e17-d994c72b6c19" Jan 23 23:57:32.260966 sshd[6152]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:32.268983 systemd[1]: sshd@12-172.31.28.204:22-4.153.228.146:50156.service: Deactivated successfully. Jan 23 23:57:32.274443 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 23:57:32.278242 systemd-logind[2010]: Session 13 logged out. Waiting for processes to exit. Jan 23 23:57:32.281017 systemd-logind[2010]: Removed session 13. 
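The repeated `trying next host - response was http.StatusNotFound` records come from containerd's resolver asking ghcr.io's OCI distribution endpoint for the tag's manifest and getting a 404 back. A minimal Go sketch of that check follows; the ghcr.io `/token` endpoint and the Accept header are assumptions based on the usual Docker/OCI token-auth flow, not details taken from the log:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// tagExists asks the registry whether a manifest exists for repo:tag,
// mirroring the resolve step that returns 404 in the records above.
func tagExists(repo, tag string) (bool, error) {
	// Anonymous pull token; the ghcr.io /token endpoint is an assumption
	// based on the common token-auth flow for public images.
	resp, err := http.Get("https://ghcr.io/token?service=ghcr.io&scope=repository:" + repo + ":pull")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		return false, err
	}

	// HEAD the manifest: 200 means the tag resolves; 404 is the
	// "not found" that containerd reports above.
	req, err := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		return false, err
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	res, err := http.DefaultClient.Do(req)
	if err != nil {
		return false, err
	}
	res.Body.Close()
	return res.StatusCode == http.StatusOK, nil
}

func main() {
	ok, err := tagExists("flatcar/calico/apiserver", "v3.30.4")
	fmt.Printf("exists=%v err=%v\n", ok, err)
}
```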
Jan 23 23:57:32.828679 containerd[2024]: time="2026-01-23T23:57:32.828609589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:57:33.082930 containerd[2024]: time="2026-01-23T23:57:33.082525907Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:33.084786 containerd[2024]: time="2026-01-23T23:57:33.084629567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:57:33.084786 containerd[2024]: time="2026-01-23T23:57:33.084733847Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 23:57:33.086208 kubelet[3241]: E0123 23:57:33.084955 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:57:33.086208 kubelet[3241]: E0123 23:57:33.085288 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:57:33.086208 kubelet[3241]: E0123 23:57:33.085554 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-47fj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-q6gtf_calico-system(40024e0b-dc12-464a-9bd9-6f315f803fe4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:33.086946 kubelet[3241]: E0123 23:57:33.086876 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q6gtf" podUID="40024e0b-dc12-464a-9bd9-6f315f803fe4" Jan 23 23:57:36.831232 kubelet[3241]: E0123 23:57:36.831057 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d6bcfdb8b-dvgwk" podUID="e0693b48-91c3-4d6b-a757-c65fc3ee493a" Jan 23 23:57:37.362893 systemd[1]: Started sshd@13-172.31.28.204:22-4.153.228.146:38438.service - OpenSSH per-connection server daemon (4.153.228.146:38438). Jan 23 23:57:37.908802 sshd[6171]: Accepted publickey for core from 4.153.228.146 port 38438 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:37.911769 sshd[6171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:37.922479 systemd-logind[2010]: New session 14 of user core. 
Jan 23 23:57:37.934075 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 23:57:38.429818 sshd[6171]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:38.436782 systemd[1]: sshd@13-172.31.28.204:22-4.153.228.146:38438.service: Deactivated successfully. Jan 23 23:57:38.441132 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 23:57:38.443687 systemd-logind[2010]: Session 14 logged out. Waiting for processes to exit. Jan 23 23:57:38.446510 systemd-logind[2010]: Removed session 14. Jan 23 23:57:38.829910 kubelet[3241]: E0123 23:57:38.829560 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74cfd6877d-hr9jw" podUID="64d067e9-db06-43a4-8ec2-5418bd9de44b" Jan 23 23:57:43.520843 systemd[1]: Started sshd@14-172.31.28.204:22-4.153.228.146:38446.service - OpenSSH per-connection server daemon (4.153.228.146:38446). Jan 23 23:57:44.030890 sshd[6209]: Accepted publickey for core from 4.153.228.146 port 38446 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:44.034119 sshd[6209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:44.043863 systemd-logind[2010]: New session 15 of user core. Jan 23 23:57:44.048632 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 23:57:44.521536 sshd[6209]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:44.529587 systemd[1]: sshd@14-172.31.28.204:22-4.153.228.146:38446.service: Deactivated successfully. Jan 23 23:57:44.534461 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 23:57:44.536410 systemd-logind[2010]: Session 15 logged out. Waiting for processes to exit. Jan 23 23:57:44.539550 systemd-logind[2010]: Removed session 15. 
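The `ImagePullBackOff` records are the retry side of the same failure: each unsuccessful pull roughly doubles the wait before kubelet tries the image again, up to a cap. A toy Go sketch of that pattern, assuming kubelet's usual 10s initial delay and 5m ceiling (the log itself shows only the resulting messages, not the parameters):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Assumed kubelet-style defaults: 10s initial delay, doubling per
	// failure, capped at 5m. Each iteration stands for one failed pull.
	delay, ceiling := 10*time.Second, 5*time.Minute
	for attempt := 1; attempt <= 6; attempt++ {
		fmt.Printf("pull attempt %d failed (ErrImagePull); ImagePullBackOff for %s\n", attempt, delay)
		delay *= 2
		if delay > ceiling {
			delay = ceiling
		}
	}
}
```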
Jan 23 23:57:44.830579 kubelet[3241]: E0123 23:57:44.829392 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6976454ff7-ddg9z" podUID="7d79c384-4d50-4538-9d9a-312b65c47eb8" Jan 23 23:57:44.832689 kubelet[3241]: E0123 23:57:44.831618 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rn45p" podUID="46c86ab0-1223-4a22-bfcf-7f463abcf340" Jan 23 23:57:45.831933 kubelet[3241]: E0123 23:57:45.831842 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q6gtf" podUID="40024e0b-dc12-464a-9bd9-6f315f803fe4" Jan 23 23:57:45.834134 kubelet[3241]: E0123 23:57:45.833819 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6976454ff7-t76qf" podUID="74258f81-20b6-4c16-8e17-d994c72b6c19" Jan 23 23:57:49.628954 systemd[1]: Started sshd@15-172.31.28.204:22-4.153.228.146:56936.service - OpenSSH per-connection server daemon (4.153.228.146:56936). Jan 23 23:57:50.158389 sshd[6223]: Accepted publickey for core from 4.153.228.146 port 56936 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:50.162974 sshd[6223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:50.176154 systemd-logind[2010]: New session 16 of user core. Jan 23 23:57:50.184962 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 23 23:57:50.754384 sshd[6223]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:50.762807 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 23:57:50.765197 systemd[1]: sshd@15-172.31.28.204:22-4.153.228.146:56936.service: Deactivated successfully. Jan 23 23:57:50.765291 systemd-logind[2010]: Session 16 logged out. Waiting for processes to exit. Jan 23 23:57:50.782170 systemd-logind[2010]: Removed session 16. Jan 23 23:57:50.832053 containerd[2024]: time="2026-01-23T23:57:50.831981259Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:57:50.852918 systemd[1]: Started sshd@16-172.31.28.204:22-4.153.228.146:56940.service - OpenSSH per-connection server daemon (4.153.228.146:56940). Jan 23 23:57:51.140053 containerd[2024]: time="2026-01-23T23:57:51.139952188Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:51.142365 containerd[2024]: time="2026-01-23T23:57:51.142219492Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:57:51.142544 containerd[2024]: time="2026-01-23T23:57:51.142343656Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:57:51.142850 kubelet[3241]: E0123 23:57:51.142720 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:57:51.144866 kubelet[3241]: E0123 23:57:51.142854 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:57:51.144866 kubelet[3241]: E0123 23:57:51.143050 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:671baf049109417185f3e6729fa67078,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z7jq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d6bcfdb8b-dvgwk_calico-system(e0693b48-91c3-4d6b-a757-c65fc3ee493a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:51.147053 containerd[2024]: time="2026-01-23T23:57:51.146684572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:57:51.385074 sshd[6235]: Accepted publickey for core from 4.153.228.146 port 56940 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:51.389091 sshd[6235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:51.406257 systemd-logind[2010]: New session 17 of user core. Jan 23 23:57:51.412668 systemd[1]: Started session-17.scope - Session 17 of User core. 
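The `failed to resolve reference` wording refers to the first step of a pull: splitting the image spec into a registry host, a repository path, and a tag, then asking that host for the tag's manifest. A naive Go illustration of the split only; it ignores digests and host ports and is not containerd's actual parser:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Split "host/repo:tag" into its parts; resolution then asks the
	// host's /v2/<repo>/manifests/<tag> endpoint for that manifest.
	ref := "ghcr.io/flatcar/calico/whisker:v3.30.4"
	hostAndRepo, tag, _ := strings.Cut(ref, ":")
	host, repo, _ := strings.Cut(hostAndRepo, "/")
	fmt.Printf("host=%s repo=%s tag=%s\n", host, repo, tag)
	// host=ghcr.io repo=flatcar/calico/whisker tag=v3.30.4
}
```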
Jan 23 23:57:51.414923 containerd[2024]: time="2026-01-23T23:57:51.414205914Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:51.416797 containerd[2024]: time="2026-01-23T23:57:51.416607162Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:57:51.416797 containerd[2024]: time="2026-01-23T23:57:51.416783946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:57:51.420877 kubelet[3241]: E0123 23:57:51.418131 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:57:51.420877 kubelet[3241]: E0123 23:57:51.418229 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:57:51.420877 kubelet[3241]: E0123 23:57:51.418463 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7jq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{}
,RestartPolicy:nil,} start failed in pod whisker-6d6bcfdb8b-dvgwk_calico-system(e0693b48-91c3-4d6b-a757-c65fc3ee493a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:51.420877 kubelet[3241]: E0123 23:57:51.420755 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d6bcfdb8b-dvgwk" podUID="e0693b48-91c3-4d6b-a757-c65fc3ee493a" Jan 23 23:57:51.851428 containerd[2024]: time="2026-01-23T23:57:51.851237204Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:57:52.136451 containerd[2024]: time="2026-01-23T23:57:52.136369373Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:52.138763 containerd[2024]: time="2026-01-23T23:57:52.138651545Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:57:52.138961 containerd[2024]: time="2026-01-23T23:57:52.138830333Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:57:52.141462 kubelet[3241]: E0123 23:57:52.140526 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:57:52.141462 kubelet[3241]: E0123 23:57:52.140609 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:57:52.141462 kubelet[3241]: E0123 23:57:52.140809 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxpxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74cfd6877d-hr9jw_calico-system(64d067e9-db06-43a4-8ec2-5418bd9de44b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:52.142171 kubelet[3241]: E0123 23:57:52.142087 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74cfd6877d-hr9jw" podUID="64d067e9-db06-43a4-8ec2-5418bd9de44b" Jan 23 23:57:52.274594 sshd[6235]: pam_unix(sshd:session): session closed for user core Jan 23 
23:57:52.285993 systemd[1]: sshd@16-172.31.28.204:22-4.153.228.146:56940.service: Deactivated successfully. Jan 23 23:57:52.293274 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 23:57:52.297635 systemd-logind[2010]: Session 17 logged out. Waiting for processes to exit. Jan 23 23:57:52.301688 systemd-logind[2010]: Removed session 17. Jan 23 23:57:52.380598 systemd[1]: Started sshd@17-172.31.28.204:22-4.153.228.146:56952.service - OpenSSH per-connection server daemon (4.153.228.146:56952). Jan 23 23:57:52.942618 sshd[6248]: Accepted publickey for core from 4.153.228.146 port 56952 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:52.944570 sshd[6248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:52.961482 systemd-logind[2010]: New session 18 of user core. Jan 23 23:57:52.965684 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 23:57:54.355260 sshd[6248]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:54.361804 systemd[1]: sshd@17-172.31.28.204:22-4.153.228.146:56952.service: Deactivated successfully. Jan 23 23:57:54.368379 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 23:57:54.372772 systemd-logind[2010]: Session 18 logged out. Waiting for processes to exit. Jan 23 23:57:54.375363 systemd-logind[2010]: Removed session 18. Jan 23 23:57:54.462944 systemd[1]: Started sshd@18-172.31.28.204:22-4.153.228.146:56964.service - OpenSSH per-connection server daemon (4.153.228.146:56964). Jan 23 23:57:55.002562 sshd[6275]: Accepted publickey for core from 4.153.228.146 port 56964 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:55.005675 sshd[6275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:55.020079 systemd-logind[2010]: New session 19 of user core. Jan 23 23:57:55.026674 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 23:57:55.777701 sshd[6275]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:55.786694 systemd[1]: sshd@18-172.31.28.204:22-4.153.228.146:56964.service: Deactivated successfully. Jan 23 23:57:55.793732 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 23:57:55.795181 systemd-logind[2010]: Session 19 logged out. Waiting for processes to exit. Jan 23 23:57:55.799923 systemd-logind[2010]: Removed session 19. Jan 23 23:57:55.834469 containerd[2024]: time="2026-01-23T23:57:55.833458368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:57:55.873965 systemd[1]: Started sshd@19-172.31.28.204:22-4.153.228.146:49148.service - OpenSSH per-connection server daemon (4.153.228.146:49148). 
Jan 23 23:57:56.103358 containerd[2024]: time="2026-01-23T23:57:56.102998769Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:56.105351 containerd[2024]: time="2026-01-23T23:57:56.105173745Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:57:56.105351 containerd[2024]: time="2026-01-23T23:57:56.105241521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:57:56.105762 kubelet[3241]: E0123 23:57:56.105637 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:56.106500 kubelet[3241]: E0123 23:57:56.105844 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:56.106500 kubelet[3241]: E0123 23:57:56.106217 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2fbcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6976454ff7-ddg9z_calico-apiserver(7d79c384-4d50-4538-9d9a-312b65c47eb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:56.108176 kubelet[3241]: E0123 23:57:56.108032 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6976454ff7-ddg9z" podUID="7d79c384-4d50-4538-9d9a-312b65c47eb8" Jan 23 23:57:56.380252 sshd[6285]: Accepted publickey for core from 4.153.228.146 port 49148 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:56.382273 sshd[6285]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:56.392900 systemd-logind[2010]: New session 20 of user core. Jan 23 23:57:56.398654 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 23:57:56.831163 containerd[2024]: time="2026-01-23T23:57:56.830923573Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:57:56.854361 sshd[6285]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:56.862250 systemd[1]: sshd@19-172.31.28.204:22-4.153.228.146:49148.service: Deactivated successfully. Jan 23 23:57:56.872630 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 23:57:56.875535 systemd-logind[2010]: Session 20 logged out. Waiting for processes to exit. Jan 23 23:57:56.879398 systemd-logind[2010]: Removed session 20. 
Jan 23 23:57:57.097864 containerd[2024]: time="2026-01-23T23:57:57.097658950Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:57.100481 containerd[2024]: time="2026-01-23T23:57:57.100360774Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:57:57.100664 containerd[2024]: time="2026-01-23T23:57:57.100377058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:57:57.101471 kubelet[3241]: E0123 23:57:57.100907 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:57.101471 kubelet[3241]: E0123 23:57:57.100974 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:57.103009 kubelet[3241]: E0123 23:57:57.101156 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6td6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6976454ff7-t76qf_calico-apiserver(74258f81-20b6-4c16-8e17-d994c72b6c19): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:57.105360 kubelet[3241]: E0123 23:57:57.104131 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6976454ff7-t76qf" podUID="74258f81-20b6-4c16-8e17-d994c72b6c19" Jan 23 23:57:57.831873 containerd[2024]: time="2026-01-23T23:57:57.831029474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:57:58.105512 containerd[2024]: time="2026-01-23T23:57:58.105146411Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:58.107680 containerd[2024]: time="2026-01-23T23:57:58.107497439Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:57:58.107680 containerd[2024]: time="2026-01-23T23:57:58.107539739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:57:58.108974 kubelet[3241]: E0123 23:57:58.108109 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:57:58.108974 kubelet[3241]: E0123 23:57:58.108181 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:57:58.108974 kubelet[3241]: E0123 23:57:58.108390 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pvhxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rn45p_calico-system(46c86ab0-1223-4a22-bfcf-7f463abcf340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:58.111892 containerd[2024]: time="2026-01-23T23:57:58.111835979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:57:58.364761 containerd[2024]: time="2026-01-23T23:57:58.364569276Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:58.367627 containerd[2024]: time="2026-01-23T23:57:58.367468260Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:57:58.367627 containerd[2024]: time="2026-01-23T23:57:58.367575168Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:57:58.368252 kubelet[3241]: E0123 23:57:58.367864 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:57:58.368252 kubelet[3241]: E0123 23:57:58.367936 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:57:58.368252 kubelet[3241]: E0123 23:57:58.368126 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pvhxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rn45p_calico-system(46c86ab0-1223-4a22-bfcf-7f463abcf340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:58.369648 kubelet[3241]: E0123 23:57:58.369546 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rn45p" podUID="46c86ab0-1223-4a22-bfcf-7f463abcf340" Jan 23 23:58:00.830870 containerd[2024]: time="2026-01-23T23:58:00.830810297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:58:01.101257 containerd[2024]: time="2026-01-23T23:58:01.100839110Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:01.103259 containerd[2024]: time="2026-01-23T23:58:01.103091162Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:58:01.103259 containerd[2024]: time="2026-01-23T23:58:01.103194866Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 23:58:01.103584 kubelet[3241]: E0123 23:58:01.103525 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:58:01.104132 kubelet[3241]: E0123 23:58:01.103592 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:58:01.104530 kubelet[3241]: E0123 23:58:01.104004 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-47fj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-q6gtf_calico-system(40024e0b-dc12-464a-9bd9-6f315f803fe4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:01.105999 kubelet[3241]: E0123 23:58:01.105902 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q6gtf" podUID="40024e0b-dc12-464a-9bd9-6f315f803fe4" Jan 23 23:58:01.956835 systemd[1]: Started 
sshd@20-172.31.28.204:22-4.153.228.146:49158.service - OpenSSH per-connection server daemon (4.153.228.146:49158). Jan 23 23:58:02.500104 sshd[6302]: Accepted publickey for core from 4.153.228.146 port 49158 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:02.503050 sshd[6302]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:02.511697 systemd-logind[2010]: New session 21 of user core. Jan 23 23:58:02.519631 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 23:58:02.999451 sshd[6302]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:03.011026 systemd-logind[2010]: Session 21 logged out. Waiting for processes to exit. Jan 23 23:58:03.012066 systemd[1]: sshd@20-172.31.28.204:22-4.153.228.146:49158.service: Deactivated successfully. Jan 23 23:58:03.018869 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 23:58:03.024091 systemd-logind[2010]: Removed session 21. Jan 23 23:58:03.839604 kubelet[3241]: E0123 23:58:03.839521 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d6bcfdb8b-dvgwk" podUID="e0693b48-91c3-4d6b-a757-c65fc3ee493a" Jan 23 23:58:05.834879 kubelet[3241]: E0123 23:58:05.833774 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74cfd6877d-hr9jw" podUID="64d067e9-db06-43a4-8ec2-5418bd9de44b" Jan 23 23:58:08.101651 systemd[1]: Started sshd@21-172.31.28.204:22-4.153.228.146:34426.service - OpenSSH per-connection server daemon (4.153.228.146:34426). Jan 23 23:58:08.663785 sshd[6314]: Accepted publickey for core from 4.153.228.146 port 34426 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:08.667953 sshd[6314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:08.686434 systemd-logind[2010]: New session 22 of user core. Jan 23 23:58:08.691673 systemd[1]: Started session-22.scope - Session 22 of User core. 
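Every goldmane/whisker/csi failure above is containerd resolving a tag that is absent from the registry, so each retry ends at the same NotFound. A minimal reproduction of that pull path against the node's containerd (a sketch, assuming the stock /run/containerd/containerd.sock and the k8s.io namespace the kubelet uses):

// pullcheck.go - drive the same PullImage path the log lines above report,
// directly against the node's containerd. Sketch only; socket path and
// namespace are the usual kubelet defaults, not taken from this log.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect to containerd: %v", err)
	}
	defer client.Close()

	// Kubelet-managed images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// The tag the calico-system pods are stuck on; resolving it should
	// fail with the same "not found" the kubelet keeps logging.
	ref := "ghcr.io/flatcar/calico/goldmane:v3.30.4"
	img, err := client.Pull(ctx, ref)
	if err != nil {
		log.Fatalf("pull %s: %v", ref, err) // expected: ... not found
	}
	fmt.Println("pulled", img.Name())
}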
Jan 23 23:58:08.830594 kubelet[3241]: E0123 23:58:08.830522 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6976454ff7-ddg9z" podUID="7d79c384-4d50-4538-9d9a-312b65c47eb8" Jan 23 23:58:08.834164 kubelet[3241]: E0123 23:58:08.834062 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rn45p" podUID="46c86ab0-1223-4a22-bfcf-7f463abcf340" Jan 23 23:58:09.279853 sshd[6314]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:09.291483 systemd[1]: sshd@21-172.31.28.204:22-4.153.228.146:34426.service: Deactivated successfully. Jan 23 23:58:09.300884 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 23:58:09.303257 systemd-logind[2010]: Session 22 logged out. Waiting for processes to exit. Jan 23 23:58:09.307665 systemd-logind[2010]: Removed session 22. 
Jan 23 23:58:10.829186 kubelet[3241]: E0123 23:58:10.828935 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6976454ff7-t76qf" podUID="74258f81-20b6-4c16-8e17-d994c72b6c19" Jan 23 23:58:13.835399 kubelet[3241]: E0123 23:58:13.834722 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q6gtf" podUID="40024e0b-dc12-464a-9bd9-6f315f803fe4" Jan 23 23:58:14.367021 systemd[1]: Started sshd@22-172.31.28.204:22-4.153.228.146:34432.service - OpenSSH per-connection server daemon (4.153.228.146:34432). Jan 23 23:58:14.886206 sshd[6348]: Accepted publickey for core from 4.153.228.146 port 34432 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:14.889505 sshd[6348]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:14.902727 systemd-logind[2010]: New session 23 of user core. Jan 23 23:58:14.909651 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 23:58:15.418742 sshd[6348]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:15.428072 systemd[1]: sshd@22-172.31.28.204:22-4.153.228.146:34432.service: Deactivated successfully. Jan 23 23:58:15.436982 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 23:58:15.444077 systemd-logind[2010]: Session 23 logged out. Waiting for processes to exit. Jan 23 23:58:15.448704 systemd-logind[2010]: Removed session 23. 
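The "trying next host - response was http.StatusNotFound" records show containerd falling through its registry hosts after a 404 on the manifest. The same check can be made with a plain OCI distribution API round trip; a sketch, assuming the repository would be public and using GHCR's anonymous token flow (endpoint and headers are assumptions, not taken from this log):

// tagcheck.go - HEAD the manifest for the missing tag via the OCI
// distribution API, the HTTP exchange behind the StatusNotFound lines.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"net/http"
)

func main() {
	repo := "flatcar/calico/goldmane"
	tag := "v3.30.4"

	// Anonymous bearer token; works only for public GHCR repositories.
	tr, err := http.Get("https://ghcr.io/token?scope=repository:" + repo + ":pull")
	if err != nil {
		log.Fatal(err)
	}
	defer tr.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(tr.Body).Decode(&tok); err != nil {
		log.Fatal(err)
	}

	// OCI distribution API: HEAD /v2/<name>/manifests/<reference>.
	req, err := http.NewRequest(http.MethodHead,
		"https://ghcr.io/v2/"+repo+"/manifests/"+tag, nil)
	if err != nil {
		log.Fatal(err)
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	req.Header.Set("Accept", "application/vnd.oci.image.index.v1+json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	resp.Body.Close()
	fmt.Println(resp.Status) // 404 Not Found matches the log above
}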
Jan 23 23:58:18.830875 kubelet[3241]: E0123 23:58:18.830794 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d6bcfdb8b-dvgwk" podUID="e0693b48-91c3-4d6b-a757-c65fc3ee493a" Jan 23 23:58:19.697880 containerd[2024]: time="2026-01-23T23:58:19.697797646Z" level=info msg="StopPodSandbox for \"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\"" Jan 23 23:58:19.851569 kubelet[3241]: E0123 23:58:19.850011 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74cfd6877d-hr9jw" podUID="64d067e9-db06-43a4-8ec2-5418bd9de44b" Jan 23 23:58:19.964205 containerd[2024]: 2026-01-23 23:58:19.854 [WARNING][6372] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9f58d96e-4844-4626-a760-be9823990f64", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd", Pod:"coredns-674b8bbfcf-6s8v5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c3077ac8bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:19.964205 containerd[2024]: 2026-01-23 23:58:19.857 [INFO][6372] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" Jan 23 23:58:19.964205 containerd[2024]: 2026-01-23 23:58:19.857 [INFO][6372] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" iface="eth0" netns="" Jan 23 23:58:19.964205 containerd[2024]: 2026-01-23 23:58:19.857 [INFO][6372] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" Jan 23 23:58:19.964205 containerd[2024]: 2026-01-23 23:58:19.857 [INFO][6372] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" Jan 23 23:58:19.964205 containerd[2024]: 2026-01-23 23:58:19.930 [INFO][6380] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" HandleID="k8s-pod-network.978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0" Jan 23 23:58:19.964205 containerd[2024]: 2026-01-23 23:58:19.930 [INFO][6380] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:19.964205 containerd[2024]: 2026-01-23 23:58:19.931 [INFO][6380] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:58:19.964205 containerd[2024]: 2026-01-23 23:58:19.953 [WARNING][6380] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" HandleID="k8s-pod-network.978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0" Jan 23 23:58:19.964205 containerd[2024]: 2026-01-23 23:58:19.953 [INFO][6380] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" HandleID="k8s-pod-network.978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0" Jan 23 23:58:19.964205 containerd[2024]: 2026-01-23 23:58:19.956 [INFO][6380] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:19.964205 containerd[2024]: 2026-01-23 23:58:19.959 [INFO][6372] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" Jan 23 23:58:19.964205 containerd[2024]: time="2026-01-23T23:58:19.963549912Z" level=info msg="TearDown network for sandbox \"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\" successfully" Jan 23 23:58:19.964205 containerd[2024]: time="2026-01-23T23:58:19.963591852Z" level=info msg="StopPodSandbox for \"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\" returns successfully" Jan 23 23:58:19.966832 containerd[2024]: time="2026-01-23T23:58:19.964876932Z" level=info msg="RemovePodSandbox for \"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\"" Jan 23 23:58:19.966832 containerd[2024]: time="2026-01-23T23:58:19.964933356Z" level=info msg="Forcibly stopping sandbox \"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\"" Jan 23 23:58:20.136069 containerd[2024]: 2026-01-23 23:58:20.054 [WARNING][6395] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9f58d96e-4844-4626-a760-be9823990f64", ResourceVersion:"1144", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"bd0976db62eb25e791edfc9038d7a08d70331f61a2404a91e1bee856533b93bd", Pod:"coredns-674b8bbfcf-6s8v5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5c3077ac8bf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:20.136069 containerd[2024]: 2026-01-23 23:58:20.055 [INFO][6395] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" Jan 23 23:58:20.136069 containerd[2024]: 2026-01-23 23:58:20.055 [INFO][6395] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" iface="eth0" netns="" Jan 23 23:58:20.136069 containerd[2024]: 2026-01-23 23:58:20.055 [INFO][6395] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" Jan 23 23:58:20.136069 containerd[2024]: 2026-01-23 23:58:20.055 [INFO][6395] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" Jan 23 23:58:20.136069 containerd[2024]: 2026-01-23 23:58:20.104 [INFO][6403] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" HandleID="k8s-pod-network.978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0" Jan 23 23:58:20.136069 containerd[2024]: 2026-01-23 23:58:20.104 [INFO][6403] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:20.136069 containerd[2024]: 2026-01-23 23:58:20.105 [INFO][6403] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:58:20.136069 containerd[2024]: 2026-01-23 23:58:20.123 [WARNING][6403] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" HandleID="k8s-pod-network.978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0" Jan 23 23:58:20.136069 containerd[2024]: 2026-01-23 23:58:20.123 [INFO][6403] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" HandleID="k8s-pod-network.978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" Workload="ip--172--31--28--204-k8s-coredns--674b8bbfcf--6s8v5-eth0" Jan 23 23:58:20.136069 containerd[2024]: 2026-01-23 23:58:20.126 [INFO][6403] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:20.136069 containerd[2024]: 2026-01-23 23:58:20.129 [INFO][6395] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48" Jan 23 23:58:20.136069 containerd[2024]: time="2026-01-23T23:58:20.134748572Z" level=info msg="TearDown network for sandbox \"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\" successfully" Jan 23 23:58:20.146378 containerd[2024]: time="2026-01-23T23:58:20.144590396Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:58:20.146378 containerd[2024]: time="2026-01-23T23:58:20.144691904Z" level=info msg="RemovePodSandbox \"978a620d6f51a96de616bf09996d980ca7ffd4690bb0516d18551e99a6b05f48\" returns successfully" Jan 23 23:58:20.146378 containerd[2024]: time="2026-01-23T23:58:20.145934732Z" level=info msg="StopPodSandbox for \"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\"" Jan 23 23:58:20.401087 containerd[2024]: 2026-01-23 23:58:20.318 [WARNING][6418] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0", GenerateName:"calico-apiserver-6976454ff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"7d79c384-4d50-4538-9d9a-312b65c47eb8", ResourceVersion:"1470", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6976454ff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2", Pod:"calico-apiserver-6976454ff7-ddg9z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b2b2744c9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:20.401087 containerd[2024]: 2026-01-23 23:58:20.318 [INFO][6418] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" Jan 23 23:58:20.401087 containerd[2024]: 2026-01-23 23:58:20.319 [INFO][6418] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" iface="eth0" netns="" Jan 23 23:58:20.401087 containerd[2024]: 2026-01-23 23:58:20.319 [INFO][6418] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" Jan 23 23:58:20.401087 containerd[2024]: 2026-01-23 23:58:20.319 [INFO][6418] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" Jan 23 23:58:20.401087 containerd[2024]: 2026-01-23 23:58:20.365 [INFO][6425] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" HandleID="k8s-pod-network.16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0" Jan 23 23:58:20.401087 containerd[2024]: 2026-01-23 23:58:20.365 [INFO][6425] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:20.401087 containerd[2024]: 2026-01-23 23:58:20.366 [INFO][6425] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:20.401087 containerd[2024]: 2026-01-23 23:58:20.383 [WARNING][6425] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" HandleID="k8s-pod-network.16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0" Jan 23 23:58:20.401087 containerd[2024]: 2026-01-23 23:58:20.385 [INFO][6425] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" HandleID="k8s-pod-network.16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0" Jan 23 23:58:20.401087 containerd[2024]: 2026-01-23 23:58:20.388 [INFO][6425] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:20.401087 containerd[2024]: 2026-01-23 23:58:20.394 [INFO][6418] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" Jan 23 23:58:20.402989 containerd[2024]: time="2026-01-23T23:58:20.401141662Z" level=info msg="TearDown network for sandbox \"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\" successfully" Jan 23 23:58:20.402989 containerd[2024]: time="2026-01-23T23:58:20.401183830Z" level=info msg="StopPodSandbox for \"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\" returns successfully" Jan 23 23:58:20.403888 containerd[2024]: time="2026-01-23T23:58:20.403474678Z" level=info msg="RemovePodSandbox for \"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\"" Jan 23 23:58:20.403888 containerd[2024]: time="2026-01-23T23:58:20.403602118Z" level=info msg="Forcibly stopping sandbox \"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\"" Jan 23 23:58:20.538865 systemd[1]: Started sshd@23-172.31.28.204:22-4.153.228.146:49338.service - OpenSSH per-connection server daemon (4.153.228.146:49338). Jan 23 23:58:20.613238 containerd[2024]: 2026-01-23 23:58:20.490 [WARNING][6440] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0", GenerateName:"calico-apiserver-6976454ff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"7d79c384-4d50-4538-9d9a-312b65c47eb8", ResourceVersion:"1470", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6976454ff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"efaec20872decdc6a9afa39af1714a2887ce53301c7d7b59dc7d5aa450d8feb2", Pod:"calico-apiserver-6976454ff7-ddg9z", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4b2b2744c9c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:20.613238 containerd[2024]: 2026-01-23 23:58:20.491 [INFO][6440] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" Jan 23 23:58:20.613238 containerd[2024]: 2026-01-23 23:58:20.491 [INFO][6440] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" iface="eth0" netns="" Jan 23 23:58:20.613238 containerd[2024]: 2026-01-23 23:58:20.491 [INFO][6440] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" Jan 23 23:58:20.613238 containerd[2024]: 2026-01-23 23:58:20.491 [INFO][6440] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" Jan 23 23:58:20.613238 containerd[2024]: 2026-01-23 23:58:20.567 [INFO][6447] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" HandleID="k8s-pod-network.16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0" Jan 23 23:58:20.613238 containerd[2024]: 2026-01-23 23:58:20.568 [INFO][6447] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:20.613238 containerd[2024]: 2026-01-23 23:58:20.569 [INFO][6447] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:20.613238 containerd[2024]: 2026-01-23 23:58:20.592 [WARNING][6447] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" HandleID="k8s-pod-network.16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0" Jan 23 23:58:20.613238 containerd[2024]: 2026-01-23 23:58:20.593 [INFO][6447] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" HandleID="k8s-pod-network.16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--ddg9z-eth0" Jan 23 23:58:20.613238 containerd[2024]: 2026-01-23 23:58:20.600 [INFO][6447] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:20.613238 containerd[2024]: 2026-01-23 23:58:20.609 [INFO][6440] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce" Jan 23 23:58:20.615045 containerd[2024]: time="2026-01-23T23:58:20.614241635Z" level=info msg="TearDown network for sandbox \"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\" successfully" Jan 23 23:58:20.623282 containerd[2024]: time="2026-01-23T23:58:20.622939283Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:58:20.623282 containerd[2024]: time="2026-01-23T23:58:20.623110259Z" level=info msg="RemovePodSandbox \"16c7832231e5ad4027ed084286d76cba6d15973e7bf640e986962225113d43ce\" returns successfully" Jan 23 23:58:20.624184 containerd[2024]: time="2026-01-23T23:58:20.624132071Z" level=info msg="StopPodSandbox for \"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\"" Jan 23 23:58:20.833257 kubelet[3241]: E0123 23:58:20.833048 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rn45p" podUID="46c86ab0-1223-4a22-bfcf-7f463abcf340" Jan 23 23:58:20.849684 containerd[2024]: 2026-01-23 23:58:20.737 [WARNING][6464] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0", GenerateName:"calico-apiserver-6976454ff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"74258f81-20b6-4c16-8e17-d994c72b6c19", ResourceVersion:"1486", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6976454ff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9", Pod:"calico-apiserver-6976454ff7-t76qf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77444c78010", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:20.849684 containerd[2024]: 2026-01-23 23:58:20.737 [INFO][6464] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" Jan 23 23:58:20.849684 containerd[2024]: 2026-01-23 23:58:20.738 [INFO][6464] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" iface="eth0" netns="" Jan 23 23:58:20.849684 containerd[2024]: 2026-01-23 23:58:20.738 [INFO][6464] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" Jan 23 23:58:20.849684 containerd[2024]: 2026-01-23 23:58:20.738 [INFO][6464] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" Jan 23 23:58:20.849684 containerd[2024]: 2026-01-23 23:58:20.792 [INFO][6472] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" HandleID="k8s-pod-network.cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0" Jan 23 23:58:20.849684 containerd[2024]: 2026-01-23 23:58:20.792 [INFO][6472] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:20.849684 containerd[2024]: 2026-01-23 23:58:20.792 [INFO][6472] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:20.849684 containerd[2024]: 2026-01-23 23:58:20.826 [WARNING][6472] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" HandleID="k8s-pod-network.cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0" Jan 23 23:58:20.849684 containerd[2024]: 2026-01-23 23:58:20.826 [INFO][6472] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" HandleID="k8s-pod-network.cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0" Jan 23 23:58:20.849684 containerd[2024]: 2026-01-23 23:58:20.840 [INFO][6472] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:20.849684 containerd[2024]: 2026-01-23 23:58:20.845 [INFO][6464] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" Jan 23 23:58:20.855898 containerd[2024]: time="2026-01-23T23:58:20.851507832Z" level=info msg="TearDown network for sandbox \"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\" successfully" Jan 23 23:58:20.855898 containerd[2024]: time="2026-01-23T23:58:20.851592828Z" level=info msg="StopPodSandbox for \"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\" returns successfully" Jan 23 23:58:20.855898 containerd[2024]: time="2026-01-23T23:58:20.852702672Z" level=info msg="RemovePodSandbox for \"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\"" Jan 23 23:58:20.855898 containerd[2024]: time="2026-01-23T23:58:20.852757980Z" level=info msg="Forcibly stopping sandbox \"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\"" Jan 23 23:58:21.099077 containerd[2024]: 2026-01-23 23:58:20.981 [WARNING][6486] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0", GenerateName:"calico-apiserver-6976454ff7-", Namespace:"calico-apiserver", SelfLink:"", UID:"74258f81-20b6-4c16-8e17-d994c72b6c19", ResourceVersion:"1486", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6976454ff7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-28-204", ContainerID:"bea0fd9421f8a3b3cd60120b983252d99bd195a42f3623a083dbc719474f94c9", Pod:"calico-apiserver-6976454ff7-t76qf", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali77444c78010", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:58:21.099077 containerd[2024]: 2026-01-23 23:58:20.981 [INFO][6486] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" Jan 23 23:58:21.099077 containerd[2024]: 2026-01-23 23:58:20.981 [INFO][6486] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" iface="eth0" netns="" Jan 23 23:58:21.099077 containerd[2024]: 2026-01-23 23:58:20.981 [INFO][6486] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" Jan 23 23:58:21.099077 containerd[2024]: 2026-01-23 23:58:20.982 [INFO][6486] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" Jan 23 23:58:21.099077 containerd[2024]: 2026-01-23 23:58:21.063 [INFO][6493] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" HandleID="k8s-pod-network.cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0" Jan 23 23:58:21.099077 containerd[2024]: 2026-01-23 23:58:21.064 [INFO][6493] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:58:21.099077 containerd[2024]: 2026-01-23 23:58:21.065 [INFO][6493] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:58:21.099077 containerd[2024]: 2026-01-23 23:58:21.086 [WARNING][6493] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" HandleID="k8s-pod-network.cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0" Jan 23 23:58:21.099077 containerd[2024]: 2026-01-23 23:58:21.087 [INFO][6493] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" HandleID="k8s-pod-network.cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" Workload="ip--172--31--28--204-k8s-calico--apiserver--6976454ff7--t76qf-eth0" Jan 23 23:58:21.099077 containerd[2024]: 2026-01-23 23:58:21.090 [INFO][6493] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:58:21.099077 containerd[2024]: 2026-01-23 23:58:21.094 [INFO][6486] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b" Jan 23 23:58:21.099077 containerd[2024]: time="2026-01-23T23:58:21.099037197Z" level=info msg="TearDown network for sandbox \"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\" successfully" Jan 23 23:58:21.109839 containerd[2024]: time="2026-01-23T23:58:21.109741569Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:58:21.109839 containerd[2024]: time="2026-01-23T23:58:21.109838181Z" level=info msg="RemovePodSandbox \"cdba390e91ec2e415eb8c9eed2c9e1150f65924d1e17320763d2e8c9782b7a5b\" returns successfully" Jan 23 23:58:21.135389 sshd[6452]: Accepted publickey for core from 4.153.228.146 port 49338 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:21.139060 sshd[6452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:21.153466 systemd-logind[2010]: New session 24 of user core. Jan 23 23:58:21.163649 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 23:58:21.697231 sshd[6452]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:21.708263 systemd[1]: sshd@23-172.31.28.204:22-4.153.228.146:49338.service: Deactivated successfully. Jan 23 23:58:21.715707 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 23:58:21.718266 systemd-logind[2010]: Session 24 logged out. Waiting for processes to exit. Jan 23 23:58:21.722486 systemd-logind[2010]: Removed session 24. 
Jan 23 23:58:21.835623 kubelet[3241]: E0123 23:58:21.835486 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6976454ff7-ddg9z" podUID="7d79c384-4d50-4538-9d9a-312b65c47eb8" Jan 23 23:58:22.831648 kubelet[3241]: E0123 23:58:22.831525 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6976454ff7-t76qf" podUID="74258f81-20b6-4c16-8e17-d994c72b6c19" Jan 23 23:58:25.833536 kubelet[3241]: E0123 23:58:25.833453 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q6gtf" podUID="40024e0b-dc12-464a-9bd9-6f315f803fe4" Jan 23 23:58:26.813507 systemd[1]: Started sshd@24-172.31.28.204:22-4.153.228.146:36164.service - OpenSSH per-connection server daemon (4.153.228.146:36164). Jan 23 23:58:27.374358 sshd[6512]: Accepted publickey for core from 4.153.228.146 port 36164 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:27.379701 sshd[6512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:27.389341 systemd-logind[2010]: New session 25 of user core. Jan 23 23:58:27.401554 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 23 23:58:27.958101 sshd[6512]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:27.964650 systemd-logind[2010]: Session 25 logged out. Waiting for processes to exit. Jan 23 23:58:27.971093 systemd[1]: sshd@24-172.31.28.204:22-4.153.228.146:36164.service: Deactivated successfully. Jan 23 23:58:27.978218 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 23:58:27.986433 systemd-logind[2010]: Removed session 25. 
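Between these "Back-off pulling image" records the kubelet is applying its image-pull backoff, which by default starts at 10s and doubles per failure up to a 5-minute ceiling; the pod workers then surface the cached error on each sync, which is why the same message recurs for minutes on end. A standalone sketch of that doubling schedule (the 10s/300s defaults are kubelet's; the loop itself is illustrative, not kubelet code):

// backoff.go - print the doubling delay schedule applied between
// image pull attempts (illustrative; defaults assumed to be 10s/300s).
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second
	ceiling := 300 * time.Second
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("attempt %d: wait %s before next pull\n", attempt, delay)
		if delay = delay * 2; delay > ceiling {
			delay = ceiling // capped at the 5-minute ceiling
		}
	}
}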
Jan 23 23:58:31.828964 kubelet[3241]: E0123 23:58:31.828683 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74cfd6877d-hr9jw" podUID="64d067e9-db06-43a4-8ec2-5418bd9de44b" Jan 23 23:58:33.831021 kubelet[3241]: E0123 23:58:33.830918 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rn45p" podUID="46c86ab0-1223-4a22-bfcf-7f463abcf340" Jan 23 23:58:33.832681 containerd[2024]: time="2026-01-23T23:58:33.832170528Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:58:34.102794 containerd[2024]: time="2026-01-23T23:58:34.102243934Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:34.104673 containerd[2024]: time="2026-01-23T23:58:34.104581078Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:58:34.105062 containerd[2024]: time="2026-01-23T23:58:34.104771302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:58:34.105173 kubelet[3241]: E0123 23:58:34.104997 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:58:34.105173 kubelet[3241]: E0123 23:58:34.105070 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:58:34.105449 kubelet[3241]: E0123 23:58:34.105246 3241 kuberuntime_manager.go:1358] "Unhandled Error" 
err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:671baf049109417185f3e6729fa67078,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z7jq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d6bcfdb8b-dvgwk_calico-system(e0693b48-91c3-4d6b-a757-c65fc3ee493a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:34.108102 containerd[2024]: time="2026-01-23T23:58:34.108043522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:58:34.369197 containerd[2024]: time="2026-01-23T23:58:34.369110231Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:34.371483 containerd[2024]: time="2026-01-23T23:58:34.371380271Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:58:34.371837 containerd[2024]: time="2026-01-23T23:58:34.371541059Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:58:34.371927 kubelet[3241]: E0123 23:58:34.371759 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:58:34.371927 kubelet[3241]: E0123 23:58:34.371823 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:58:34.372072 kubelet[3241]: E0123 23:58:34.372003 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z7jq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-6d6bcfdb8b-dvgwk_calico-system(e0693b48-91c3-4d6b-a757-c65fc3ee493a): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:34.373394 kubelet[3241]: E0123 23:58:34.373270 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d6bcfdb8b-dvgwk" podUID="e0693b48-91c3-4d6b-a757-c65fc3ee493a" Jan 23 23:58:34.828676 kubelet[3241]: E0123 23:58:34.828354 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6976454ff7-t76qf" podUID="74258f81-20b6-4c16-8e17-d994c72b6c19" Jan 23 23:58:34.829579 kubelet[3241]: E0123 23:58:34.828883 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6976454ff7-ddg9z" podUID="7d79c384-4d50-4538-9d9a-312b65c47eb8" Jan 23 23:58:40.827787 kubelet[3241]: E0123 23:58:40.827709 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q6gtf" podUID="40024e0b-dc12-464a-9bd9-6f315f803fe4" Jan 23 23:58:43.345727 systemd[1]: cri-containerd-fd6d9eebcec74f2f298992dd55260fa71f17eb7b48cac2ddbe06ca667c3872c0.scope: Deactivated successfully. Jan 23 23:58:43.346204 systemd[1]: cri-containerd-fd6d9eebcec74f2f298992dd55260fa71f17eb7b48cac2ddbe06ca667c3872c0.scope: Consumed 27.114s CPU time. Jan 23 23:58:43.398160 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd6d9eebcec74f2f298992dd55260fa71f17eb7b48cac2ddbe06ca667c3872c0-rootfs.mount: Deactivated successfully. Jan 23 23:58:43.418698 containerd[2024]: time="2026-01-23T23:58:43.418346636Z" level=info msg="shim disconnected" id=fd6d9eebcec74f2f298992dd55260fa71f17eb7b48cac2ddbe06ca667c3872c0 namespace=k8s.io Jan 23 23:58:43.418698 containerd[2024]: time="2026-01-23T23:58:43.418446284Z" level=warning msg="cleaning up after shim disconnected" id=fd6d9eebcec74f2f298992dd55260fa71f17eb7b48cac2ddbe06ca667c3872c0 namespace=k8s.io Jan 23 23:58:43.418698 containerd[2024]: time="2026-01-23T23:58:43.418466864Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:43.424226 systemd[1]: cri-containerd-1a01031adf4d8cb713e7912990f9558808ddaacbd8cd691ff153b8ea0e85b5bb.scope: Deactivated successfully. Jan 23 23:58:43.424850 systemd[1]: cri-containerd-1a01031adf4d8cb713e7912990f9558808ddaacbd8cd691ff153b8ea0e85b5bb.scope: Consumed 5.558s CPU time, 18.2M memory peak, 0B memory swap peak. Jan 23 23:58:43.483851 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a01031adf4d8cb713e7912990f9558808ddaacbd8cd691ff153b8ea0e85b5bb-rootfs.mount: Deactivated successfully. 
Jan 23 23:58:43.489297 containerd[2024]: time="2026-01-23T23:58:43.489223460Z" level=info msg="shim disconnected" id=1a01031adf4d8cb713e7912990f9558808ddaacbd8cd691ff153b8ea0e85b5bb namespace=k8s.io
Jan 23 23:58:43.489792 containerd[2024]: time="2026-01-23T23:58:43.489544196Z" level=warning msg="cleaning up after shim disconnected" id=1a01031adf4d8cb713e7912990f9558808ddaacbd8cd691ff153b8ea0e85b5bb namespace=k8s.io
Jan 23 23:58:43.489792 containerd[2024]: time="2026-01-23T23:58:43.489573896Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:58:43.510504 containerd[2024]: time="2026-01-23T23:58:43.510255825Z" level=warning msg="cleanup warnings time=\"2026-01-23T23:58:43Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 23 23:58:43.733865 kubelet[3241]: I0123 23:58:43.733697 3241 scope.go:117] "RemoveContainer" containerID="1a01031adf4d8cb713e7912990f9558808ddaacbd8cd691ff153b8ea0e85b5bb"
Jan 23 23:58:43.739808 kubelet[3241]: I0123 23:58:43.739756 3241 scope.go:117] "RemoveContainer" containerID="fd6d9eebcec74f2f298992dd55260fa71f17eb7b48cac2ddbe06ca667c3872c0"
Jan 23 23:58:43.740434 containerd[2024]: time="2026-01-23T23:58:43.740379346Z" level=info msg="CreateContainer within sandbox \"b3e721414d066c8c1e02a57a3d0a098814d5cbeecf58a0fa2cba6aa81f83308f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 23 23:58:43.744374 containerd[2024]: time="2026-01-23T23:58:43.744249310Z" level=info msg="CreateContainer within sandbox \"1592d848ecde5a96e3daad2baf90fa9e6c0e3d95e664cb8c4d5733e24f7f5368\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 23 23:58:43.772628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount942178861.mount: Deactivated successfully.
Jan 23 23:58:43.781170 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount581361326.mount: Deactivated successfully.
Jan 23 23:58:43.786388 containerd[2024]: time="2026-01-23T23:58:43.784015510Z" level=info msg="CreateContainer within sandbox \"1592d848ecde5a96e3daad2baf90fa9e6c0e3d95e664cb8c4d5733e24f7f5368\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"84e4b014b537e50830d670eebfe6ebd3c379a7ded9fa3076c837f73d479c5a6b\""
Jan 23 23:58:43.788417 containerd[2024]: time="2026-01-23T23:58:43.786753754Z" level=info msg="StartContainer for \"84e4b014b537e50830d670eebfe6ebd3c379a7ded9fa3076c837f73d479c5a6b\""
Jan 23 23:58:43.797263 containerd[2024]: time="2026-01-23T23:58:43.797194798Z" level=info msg="CreateContainer within sandbox \"b3e721414d066c8c1e02a57a3d0a098814d5cbeecf58a0fa2cba6aa81f83308f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a1e5b6fc7e04fe70bc4c6f99848da21b57ec0ac338fd3b667b98d39f6793b57e\""
Jan 23 23:58:43.799430 containerd[2024]: time="2026-01-23T23:58:43.798461446Z" level=info msg="StartContainer for \"a1e5b6fc7e04fe70bc4c6f99848da21b57ec0ac338fd3b667b98d39f6793b57e\""
Jan 23 23:58:43.845668 systemd[1]: Started cri-containerd-84e4b014b537e50830d670eebfe6ebd3c379a7ded9fa3076c837f73d479c5a6b.scope - libcontainer container 84e4b014b537e50830d670eebfe6ebd3c379a7ded9fa3076c837f73d479c5a6b.
Jan 23 23:58:43.877687 systemd[1]: Started cri-containerd-a1e5b6fc7e04fe70bc4c6f99848da21b57ec0ac338fd3b667b98d39f6793b57e.scope - libcontainer container a1e5b6fc7e04fe70bc4c6f99848da21b57ec0ac338fd3b667b98d39f6793b57e.
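The systemd entries around each container exit record how much CPU time the scope consumed before teardown (27.114s for fd6d9eeb…, 5.558s for 1a01031a…), which is useful for telling a long-lived container's restart from a crash loop. A throwaway filter like the one below, fed with `journalctl -b --no-pager` output, pulls those lifetimes out; the regexes are keyed to the exact systemd phrasing shown in this log and are not a stable interface.

    import re
    import sys

    # Matches the cri-containerd scope lines seen above, e.g.
    #   cri-containerd-<64 hex>.scope: Deactivated successfully.
    #   cri-containerd-<64 hex>.scope: Consumed 27.114s CPU time.
    SCOPE = re.compile(
        r"cri-containerd-(?P<cid>[0-9a-f]{64})\.scope: "
        r"(?:Consumed (?P<cpu>[\d.]+)s CPU time|Deactivated successfully)")

    for line in sys.stdin:
        m = SCOPE.search(line)
        if not m:
            continue
        cid = m.group("cid")[:12]  # short ID, as crictl/ctr print it
        if m.group("cpu"):
            print(f"{cid} consumed {m.group('cpu')}s CPU")
        else:
            print(f"{cid} deactivated")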
Jan 23 23:58:43.953637 containerd[2024]: time="2026-01-23T23:58:43.953180003Z" level=info msg="StartContainer for \"84e4b014b537e50830d670eebfe6ebd3c379a7ded9fa3076c837f73d479c5a6b\" returns successfully"
Jan 23 23:58:43.971912 containerd[2024]: time="2026-01-23T23:58:43.971841839Z" level=info msg="StartContainer for \"a1e5b6fc7e04fe70bc4c6f99848da21b57ec0ac338fd3b667b98d39f6793b57e\" returns successfully"
Jan 23 23:58:45.832688 kubelet[3241]: E0123 23:58:45.832606 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d6bcfdb8b-dvgwk" podUID="e0693b48-91c3-4d6b-a757-c65fc3ee493a"
Jan 23 23:58:46.827951 containerd[2024]: time="2026-01-23T23:58:46.827898565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 23 23:58:47.078167 containerd[2024]: time="2026-01-23T23:58:47.077973010Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 23 23:58:47.080295 containerd[2024]: time="2026-01-23T23:58:47.080184070Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 23 23:58:47.080454 containerd[2024]: time="2026-01-23T23:58:47.080337262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 23 23:58:47.080690 kubelet[3241]: E0123 23:58:47.080632 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 23:58:47.081206 kubelet[3241]: E0123 23:58:47.080704 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 23:58:47.081206 kubelet[3241]: E0123 23:58:47.080903 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rxpxk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74cfd6877d-hr9jw_calico-system(64d067e9-db06-43a4-8ec2-5418bd9de44b): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:58:47.082249 kubelet[3241]: E0123 23:58:47.082122 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74cfd6877d-hr9jw" podUID="64d067e9-db06-43a4-8ec2-5418bd9de44b"
Jan 23 23:58:47.829056 containerd[2024]: time="2026-01-23T23:58:47.828635102Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 23 23:58:48.090641 containerd[2024]: time="2026-01-23T23:58:48.090449795Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 23 23:58:48.092848 containerd[2024]: time="2026-01-23T23:58:48.092770211Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 23 23:58:48.093018 containerd[2024]: time="2026-01-23T23:58:48.092915135Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 23 23:58:48.093185 kubelet[3241]: E0123 23:58:48.093127 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 23:58:48.093750 kubelet[3241]: E0123 23:58:48.093199 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 23:58:48.093750 kubelet[3241]: E0123 23:58:48.093398 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pvhxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rn45p_calico-system(46c86ab0-1223-4a22-bfcf-7f463abcf340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:58:48.096406 containerd[2024]: time="2026-01-23T23:58:48.096281171Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 23:58:48.390961 containerd[2024]: time="2026-01-23T23:58:48.390877573Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 23 23:58:48.393102 containerd[2024]: time="2026-01-23T23:58:48.393040753Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 23:58:48.393228 containerd[2024]: time="2026-01-23T23:58:48.393194893Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 23 23:58:48.393487 kubelet[3241]: E0123 23:58:48.393411 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 23:58:48.393608 kubelet[3241]: E0123 23:58:48.393495 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 23:58:48.393818 kubelet[3241]: E0123 23:58:48.393685 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pvhxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-rn45p_calico-system(46c86ab0-1223-4a22-bfcf-7f463abcf340): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:58:48.395192 kubelet[3241]: E0123 23:58:48.395103 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rn45p" podUID="46c86ab0-1223-4a22-bfcf-7f463abcf340"
Jan 23 23:58:48.641616 systemd[1]: cri-containerd-70f6171b30bcca19bbb4897d2f308afe3818b909b40e70e6263509f2093b15ed.scope: Deactivated successfully.
Jan 23 23:58:48.642855 systemd[1]: cri-containerd-70f6171b30bcca19bbb4897d2f308afe3818b909b40e70e6263509f2093b15ed.scope: Consumed 4.377s CPU time, 13.4M memory peak, 0B memory swap peak.
Jan 23 23:58:48.690968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-70f6171b30bcca19bbb4897d2f308afe3818b909b40e70e6263509f2093b15ed-rootfs.mount: Deactivated successfully.
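The csi-node-driver pod has two containers (calico-csi and csi-node-driver-registrar), and kubelet folds both failures into one bracketed "Error syncing pod" value, as it did earlier for whisker/whisker-backend. When scanning a long journal, a small helper to unpack those aggregated messages into (container, reason) pairs is handy. The quoting below matches the backslash-escaped form in these journal excerpts; treat it as a log-reading aid, not a stable kubelet interface.

    import re

    # Matches: failed to \"StartContainer\" for \"<name>\" with <Reason>
    ENTRY = re.compile(
        r'failed to \\"StartContainer\\" for \\"(?P<name>[^"\\]+)\\" '
        r'with (?P<reason>\w+)')

    def failures(line: str):
        """Return (container, reason) pairs from one journal line."""
        return [(m.group("name"), m.group("reason"))
                for m in ENTRY.finditer(line)]

    sample = ('err="[failed to \\"StartContainer\\" for \\"calico-csi\\" '
              'with ErrImagePull: ..., failed to \\"StartContainer\\" for '
              '\\"csi-node-driver-registrar\\" with ErrImagePull: ...]"')
    print(failures(sample))
    # [('calico-csi', 'ErrImagePull'), ('csi-node-driver-registrar', 'ErrImagePull')]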
Jan 23 23:58:48.707661 containerd[2024]: time="2026-01-23T23:58:48.707213738Z" level=info msg="shim disconnected" id=70f6171b30bcca19bbb4897d2f308afe3818b909b40e70e6263509f2093b15ed namespace=k8s.io
Jan 23 23:58:48.707661 containerd[2024]: time="2026-01-23T23:58:48.707301938Z" level=warning msg="cleaning up after shim disconnected" id=70f6171b30bcca19bbb4897d2f308afe3818b909b40e70e6263509f2093b15ed namespace=k8s.io
Jan 23 23:58:48.707661 containerd[2024]: time="2026-01-23T23:58:48.707366354Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:58:48.763216 kubelet[3241]: I0123 23:58:48.763135 3241 scope.go:117] "RemoveContainer" containerID="70f6171b30bcca19bbb4897d2f308afe3818b909b40e70e6263509f2093b15ed"
Jan 23 23:58:48.767811 containerd[2024]: time="2026-01-23T23:58:48.767592735Z" level=info msg="CreateContainer within sandbox \"8699028647641111ffd7a81703769235bb64c2e07ac3a55defd100ac62fbe41c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 23 23:58:48.796974 containerd[2024]: time="2026-01-23T23:58:48.796771755Z" level=info msg="CreateContainer within sandbox \"8699028647641111ffd7a81703769235bb64c2e07ac3a55defd100ac62fbe41c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"bacebfd499a76072716fdd5863b504d0f96a97fb1cf72815d00ffb043d3a87aa\""
Jan 23 23:58:48.799930 containerd[2024]: time="2026-01-23T23:58:48.799335939Z" level=info msg="StartContainer for \"bacebfd499a76072716fdd5863b504d0f96a97fb1cf72815d00ffb043d3a87aa\""
Jan 23 23:58:48.834669 containerd[2024]: time="2026-01-23T23:58:48.834608007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 23:58:48.868687 systemd[1]: Started cri-containerd-bacebfd499a76072716fdd5863b504d0f96a97fb1cf72815d00ffb043d3a87aa.scope - libcontainer container bacebfd499a76072716fdd5863b504d0f96a97fb1cf72815d00ffb043d3a87aa.
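At this point kubelet has recreated kube-controller-manager, tigera-operator, and now kube-scheduler inside their existing sandboxes; the `Attempt:1` in each ContainerMetadata is the per-container restart counter. The same counters can be read from the API side with the official Kubernetes Python client (pip install kubernetes). This is a sketch: tigera-operator's pod name appears in the log, but the kube-scheduler static-pod name is only inferred from the node name (ip-172-31-28-204) seen in the lease entries and may differ.

    from kubernetes import client, config

    config.load_kube_config()  # or config.load_incluster_config()
    v1 = client.CoreV1Api()

    # First name is from the log; second is inferred (node-name suffix).
    pods = [("tigera-operator", "tigera-operator-7dcd859c48-lpm27"),
            ("kube-system", "kube-scheduler-ip-172-31-28-204")]

    for ns, name in pods:
        pod = v1.read_namespaced_pod(name=name, namespace=ns)
        for cs in pod.status.container_statuses or []:
            print(f"{ns}/{name} {cs.name}: restarts={cs.restart_count}")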
Jan 23 23:58:48.943737 containerd[2024]: time="2026-01-23T23:58:48.942894688Z" level=info msg="StartContainer for \"bacebfd499a76072716fdd5863b504d0f96a97fb1cf72815d00ffb043d3a87aa\" returns successfully"
Jan 23 23:58:49.120113 kubelet[3241]: E0123 23:58:49.118287 3241 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-204?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 23 23:58:49.121574 containerd[2024]: time="2026-01-23T23:58:49.121294344Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 23 23:58:49.125231 containerd[2024]: time="2026-01-23T23:58:49.125154672Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 23:58:49.125467 containerd[2024]: time="2026-01-23T23:58:49.125302428Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 23:58:49.126132 kubelet[3241]: E0123 23:58:49.125711 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 23:58:49.126132 kubelet[3241]: E0123 23:58:49.125791 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 23:58:49.126132 kubelet[3241]: E0123 23:58:49.125980 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n6td6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6976454ff7-t76qf_calico-apiserver(74258f81-20b6-4c16-8e17-d994c72b6c19): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:58:49.127356 kubelet[3241]: E0123 23:58:49.127238 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6976454ff7-t76qf" podUID="74258f81-20b6-4c16-8e17-d994c72b6c19"
Jan 23 23:58:49.836669 containerd[2024]: time="2026-01-23T23:58:49.835461880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 23:58:50.123834 containerd[2024]: time="2026-01-23T23:58:50.123705073Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 23 23:58:50.126585 containerd[2024]: time="2026-01-23T23:58:50.126416149Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 23:58:50.126585 containerd[2024]: time="2026-01-23T23:58:50.126539077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 23:58:50.127446 kubelet[3241]: E0123 23:58:50.127075 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 23:58:50.127446 kubelet[3241]: E0123 23:58:50.127165 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 23:58:50.128637 kubelet[3241]: E0123 23:58:50.127863 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2fbcb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6976454ff7-ddg9z_calico-apiserver(7d79c384-4d50-4538-9d9a-312b65c47eb8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:58:50.129287 kubelet[3241]: E0123 23:58:50.129222 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6976454ff7-ddg9z" podUID="7d79c384-4d50-4538-9d9a-312b65c47eb8"
Jan 23 23:58:52.828462 containerd[2024]: time="2026-01-23T23:58:52.828083275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 23 23:58:53.091447 containerd[2024]: time="2026-01-23T23:58:53.091030972Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 23 23:58:53.093430 containerd[2024]: time="2026-01-23T23:58:53.093226408Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 23 23:58:53.093430 containerd[2024]: time="2026-01-23T23:58:53.093377968Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 23 23:58:53.094174 kubelet[3241]: E0123 23:58:53.093727 3241 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 23:58:53.094174 kubelet[3241]: E0123 23:58:53.093795 3241 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 23:58:53.094174 kubelet[3241]: E0123 23:58:53.094040 3241 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-47fj5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-q6gtf_calico-system(40024e0b-dc12-464a-9bd9-6f315f803fe4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:58:53.095362 kubelet[3241]: E0123 23:58:53.095238 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-q6gtf" podUID="40024e0b-dc12-464a-9bd9-6f315f803fe4"
Jan 23 23:58:55.405173 systemd[1]: cri-containerd-84e4b014b537e50830d670eebfe6ebd3c379a7ded9fa3076c837f73d479c5a6b.scope: Deactivated successfully.
Jan 23 23:58:55.444947 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-84e4b014b537e50830d670eebfe6ebd3c379a7ded9fa3076c837f73d479c5a6b-rootfs.mount: Deactivated successfully.
Jan 23 23:58:55.458857 containerd[2024]: time="2026-01-23T23:58:55.458595740Z" level=info msg="shim disconnected" id=84e4b014b537e50830d670eebfe6ebd3c379a7ded9fa3076c837f73d479c5a6b namespace=k8s.io
Jan 23 23:58:55.458857 containerd[2024]: time="2026-01-23T23:58:55.458709008Z" level=warning msg="cleaning up after shim disconnected" id=84e4b014b537e50830d670eebfe6ebd3c379a7ded9fa3076c837f73d479c5a6b namespace=k8s.io
Jan 23 23:58:55.458857 containerd[2024]: time="2026-01-23T23:58:55.458731904Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:58:55.789710 kubelet[3241]: I0123 23:58:55.788367 3241 scope.go:117] "RemoveContainer" containerID="fd6d9eebcec74f2f298992dd55260fa71f17eb7b48cac2ddbe06ca667c3872c0"
Jan 23 23:58:55.789710 kubelet[3241]: I0123 23:58:55.788880 3241 scope.go:117] "RemoveContainer" containerID="84e4b014b537e50830d670eebfe6ebd3c379a7ded9fa3076c837f73d479c5a6b"
Jan 23 23:58:55.789710 kubelet[3241]: E0123 23:58:55.789128 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-lpm27_tigera-operator(18abfb99-8729-47bd-a6b5-01ded22e2bca)\"" pod="tigera-operator/tigera-operator-7dcd859c48-lpm27" podUID="18abfb99-8729-47bd-a6b5-01ded22e2bca"
Jan 23 23:58:55.792354 containerd[2024]: time="2026-01-23T23:58:55.792078970Z" level=info msg="RemoveContainer for \"fd6d9eebcec74f2f298992dd55260fa71f17eb7b48cac2ddbe06ca667c3872c0\""
Jan 23 23:58:55.800918 containerd[2024]: time="2026-01-23T23:58:55.800569330Z" level=info msg="RemoveContainer for \"fd6d9eebcec74f2f298992dd55260fa71f17eb7b48cac2ddbe06ca667c3872c0\" returns successfully"
Jan 23 23:58:57.829264 kubelet[3241]: E0123 23:58:57.829033 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-6d6bcfdb8b-dvgwk" podUID="e0693b48-91c3-4d6b-a757-c65fc3ee493a"
Jan 23 23:58:59.120509 kubelet[3241]: E0123 23:58:59.120423 3241 controller.go:195] "Failed to update lease" err="Put \"https://172.31.28.204:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-28-204?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 23 23:59:00.829300 kubelet[3241]: E0123 23:59:00.829218 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-rn45p" podUID="46c86ab0-1223-4a22-bfcf-7f463abcf340"
Jan 23 23:59:01.829400 kubelet[3241]: E0123 23:59:01.829142 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74cfd6877d-hr9jw" podUID="64d067e9-db06-43a4-8ec2-5418bd9de44b"
Jan 23 23:59:01.829400 kubelet[3241]: E0123 23:59:01.829307 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6976454ff7-ddg9z" podUID="7d79c384-4d50-4538-9d9a-312b65c47eb8"
Jan 23 23:59:03.827866 kubelet[3241]: E0123 23:59:03.827782 3241 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6976454ff7-t76qf" podUID="74258f81-20b6-4c16-8e17-d994c72b6c19"