Jan 23 23:54:37.235710 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 23 23:54:37.235757 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 23 22:26:47 -00 2026
Jan 23 23:54:37.235799 kernel: KASLR disabled due to lack of seed
Jan 23 23:54:37.235820 kernel: efi: EFI v2.7 by EDK II
Jan 23 23:54:37.235837 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Jan 23 23:54:37.235853 kernel: ACPI: Early table checksum verification disabled
Jan 23 23:54:37.235871 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 23 23:54:37.235887 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 23 23:54:37.235903 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 23 23:54:37.235918 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 23 23:54:37.235966 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 23 23:54:37.235983 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 23 23:54:37.235998 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 23 23:54:37.236015 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 23 23:54:37.236033 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 23 23:54:37.236056 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 23 23:54:37.236073 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 23 23:54:37.236089 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 23 23:54:37.236106 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 23 23:54:37.236122 kernel: printk: bootconsole [uart0] enabled
Jan 23 23:54:37.236138 kernel: NUMA: Failed to initialise from firmware
Jan 23 23:54:37.236155 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 23:54:37.236172 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 23 23:54:37.236188 kernel: Zone ranges:
Jan 23 23:54:37.236204 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 23 23:54:37.236220 kernel: DMA32 empty
Jan 23 23:54:37.236241 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 23 23:54:37.236257 kernel: Movable zone start for each node
Jan 23 23:54:37.236273 kernel: Early memory node ranges
Jan 23 23:54:37.236290 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 23 23:54:37.236306 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 23 23:54:37.236322 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 23 23:54:37.236338 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 23 23:54:37.236355 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 23 23:54:37.236372 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 23 23:54:37.236388 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 23 23:54:37.236405 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 23 23:54:37.236421 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 23:54:37.236442 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 23 23:54:37.236459 kernel: psci: probing for conduit method from ACPI.
Jan 23 23:54:37.236482 kernel: psci: PSCIv1.0 detected in firmware.
Jan 23 23:54:37.236500 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 23:54:37.236517 kernel: psci: Trusted OS migration not required
Jan 23 23:54:37.236539 kernel: psci: SMC Calling Convention v1.1
Jan 23 23:54:37.236557 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jan 23 23:54:37.236574 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 23 23:54:37.236592 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 23 23:54:37.236609 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 23:54:37.236627 kernel: Detected PIPT I-cache on CPU0
Jan 23 23:54:37.236644 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 23:54:37.236661 kernel: CPU features: detected: Spectre-v2
Jan 23 23:54:37.236678 kernel: CPU features: detected: Spectre-v3a
Jan 23 23:54:37.236695 kernel: CPU features: detected: Spectre-BHB
Jan 23 23:54:37.236713 kernel: CPU features: detected: ARM erratum 1742098
Jan 23 23:54:37.236734 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 23 23:54:37.236752 kernel: alternatives: applying boot alternatives
Jan 23 23:54:37.236772 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:54:37.236790 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 23:54:37.236807 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 23:54:37.236825 kernel: Fallback order for Node 0: 0
Jan 23 23:54:37.236842 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 23 23:54:37.236859 kernel: Policy zone: Normal
Jan 23 23:54:37.236876 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 23:54:37.236893 kernel: software IO TLB: area num 2.
Jan 23 23:54:37.236911 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 23 23:54:37.242290 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Jan 23 23:54:37.242324 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 23:54:37.242342 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 23:54:37.242361 kernel: rcu: RCU event tracing is enabled.
Jan 23 23:54:37.242379 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 23:54:37.242397 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 23:54:37.242414 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 23:54:37.242432 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 23:54:37.242450 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 23:54:37.242467 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 23:54:37.242484 kernel: GICv3: 96 SPIs implemented
Jan 23 23:54:37.242511 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 23:54:37.242529 kernel: Root IRQ handler: gic_handle_irq
Jan 23 23:54:37.242546 kernel: GICv3: GICv3 features: 16 PPIs
Jan 23 23:54:37.242564 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 23 23:54:37.242581 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 23 23:54:37.242598 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 23 23:54:37.242616 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 23 23:54:37.242634 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 23 23:54:37.242651 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 23 23:54:37.242668 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 23 23:54:37.242686 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 23:54:37.242703 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 23 23:54:37.242725 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 23 23:54:37.242743 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 23 23:54:37.242760 kernel: Console: colour dummy device 80x25
Jan 23 23:54:37.242779 kernel: printk: console [tty1] enabled
Jan 23 23:54:37.242797 kernel: ACPI: Core revision 20230628
Jan 23 23:54:37.242815 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 23 23:54:37.242833 kernel: pid_max: default: 32768 minimum: 301
Jan 23 23:54:37.242851 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 23 23:54:37.242868 kernel: landlock: Up and running.
Jan 23 23:54:37.242890 kernel: SELinux: Initializing.
Jan 23 23:54:37.242908 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:54:37.242926 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:54:37.242985 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:54:37.243004 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:54:37.243022 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 23:54:37.243040 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 23:54:37.243058 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 23 23:54:37.243076 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 23 23:54:37.243100 kernel: Remapping and enabling EFI services.
Jan 23 23:54:37.243118 kernel: smp: Bringing up secondary CPUs ...
Jan 23 23:54:37.243136 kernel: Detected PIPT I-cache on CPU1
Jan 23 23:54:37.243154 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 23 23:54:37.243172 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 23 23:54:37.243189 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 23 23:54:37.243207 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 23:54:37.243224 kernel: SMP: Total of 2 processors activated.
Jan 23 23:54:37.243242 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 23:54:37.243264 kernel: CPU features: detected: 32-bit EL1 Support
Jan 23 23:54:37.243282 kernel: CPU features: detected: CRC32 instructions
Jan 23 23:54:37.243300 kernel: CPU: All CPU(s) started at EL1
Jan 23 23:54:37.243329 kernel: alternatives: applying system-wide alternatives
Jan 23 23:54:37.243351 kernel: devtmpfs: initialized
Jan 23 23:54:37.243370 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 23:54:37.243388 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 23:54:37.243406 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 23:54:37.243425 kernel: SMBIOS 3.0.0 present.
Jan 23 23:54:37.243448 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 23 23:54:37.243466 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 23:54:37.243485 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 23:54:37.243503 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 23:54:37.243522 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 23:54:37.243541 kernel: audit: initializing netlink subsys (disabled)
Jan 23 23:54:37.243559 kernel: audit: type=2000 audit(0.286:1): state=initialized audit_enabled=0 res=1
Jan 23 23:54:37.243578 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 23:54:37.243601 kernel: cpuidle: using governor menu
Jan 23 23:54:37.243619 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 23:54:37.243637 kernel: ASID allocator initialised with 65536 entries
Jan 23 23:54:37.243656 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 23:54:37.243674 kernel: Serial: AMBA PL011 UART driver
Jan 23 23:54:37.243692 kernel: Modules: 17488 pages in range for non-PLT usage
Jan 23 23:54:37.243710 kernel: Modules: 509008 pages in range for PLT usage
Jan 23 23:54:37.243729 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 23:54:37.243747 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 23:54:37.243770 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 23:54:37.243808 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 23:54:37.243829 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 23:54:37.243847 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 23:54:37.243866 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 23:54:37.243884 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 23:54:37.243903 kernel: ACPI: Added _OSI(Module Device)
Jan 23 23:54:37.243922 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 23:54:37.252732 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 23:54:37.252772 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 23:54:37.252792 kernel: ACPI: Interpreter enabled
Jan 23 23:54:37.252811 kernel: ACPI: Using GIC for interrupt routing
Jan 23 23:54:37.252830 kernel: ACPI: MCFG table detected, 1 entries
Jan 23 23:54:37.252848 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Jan 23 23:54:37.253338 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 23:54:37.253647 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 23:54:37.253853 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 23:54:37.255733 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Jan 23 23:54:37.256000 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Jan 23 23:54:37.256027 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 23 23:54:37.256047 kernel: acpiphp: Slot [1] registered
Jan 23 23:54:37.256066 kernel: acpiphp: Slot [2] registered
Jan 23 23:54:37.256085 kernel: acpiphp: Slot [3] registered
Jan 23 23:54:37.256103 kernel: acpiphp: Slot [4] registered
Jan 23 23:54:37.256122 kernel: acpiphp: Slot [5] registered
Jan 23 23:54:37.256149 kernel: acpiphp: Slot [6] registered
Jan 23 23:54:37.256168 kernel: acpiphp: Slot [7] registered
Jan 23 23:54:37.256186 kernel: acpiphp: Slot [8] registered
Jan 23 23:54:37.256204 kernel: acpiphp: Slot [9] registered
Jan 23 23:54:37.256223 kernel: acpiphp: Slot [10] registered
Jan 23 23:54:37.256242 kernel: acpiphp: Slot [11] registered
Jan 23 23:54:37.256260 kernel: acpiphp: Slot [12] registered
Jan 23 23:54:37.256278 kernel: acpiphp: Slot [13] registered
Jan 23 23:54:37.256296 kernel: acpiphp: Slot [14] registered
Jan 23 23:54:37.256314 kernel: acpiphp: Slot [15] registered
Jan 23 23:54:37.256337 kernel: acpiphp: Slot [16] registered
Jan 23 23:54:37.256356 kernel: acpiphp: Slot [17] registered
Jan 23 23:54:37.256374 kernel: acpiphp: Slot [18] registered
Jan 23 23:54:37.256392 kernel: acpiphp: Slot [19] registered
Jan 23 23:54:37.256411 kernel: acpiphp: Slot [20] registered
Jan 23 23:54:37.256445 kernel: acpiphp: Slot [21] registered
Jan 23 23:54:37.256467 kernel: acpiphp: Slot [22] registered
Jan 23 23:54:37.256486 kernel: acpiphp: Slot [23] registered
Jan 23 23:54:37.256504 kernel: acpiphp: Slot [24] registered
Jan 23 23:54:37.256542 kernel: acpiphp: Slot [25] registered
Jan 23 23:54:37.256566 kernel: acpiphp: Slot [26] registered
Jan 23 23:54:37.256585 kernel: acpiphp: Slot [27] registered
Jan 23 23:54:37.256604 kernel: acpiphp: Slot [28] registered
Jan 23 23:54:37.256624 kernel: acpiphp: Slot [29] registered
Jan 23 23:54:37.256642 kernel: acpiphp: Slot [30] registered
Jan 23 23:54:37.256661 kernel: acpiphp: Slot [31] registered
Jan 23 23:54:37.256679 kernel: PCI host bridge to bus 0000:00
Jan 23 23:54:37.256906 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 23 23:54:37.260615 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 23 23:54:37.260810 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 23 23:54:37.261120 kernel: pci_bus 0000:00: root bus resource [bus 00]
Jan 23 23:54:37.261365 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 23 23:54:37.261596 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 23 23:54:37.261818 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 23 23:54:37.263756 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 23 23:54:37.264101 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 23 23:54:37.264323 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 23:54:37.264556 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 23 23:54:37.264766 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 23 23:54:37.267111 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 23 23:54:37.267381 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 23 23:54:37.267611 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 23:54:37.267835 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 23 23:54:37.268132 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 23 23:54:37.268322 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 23 23:54:37.268349 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 23 23:54:37.268369 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 23 23:54:37.268389 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 23 23:54:37.268408 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 23 23:54:37.268435 kernel: iommu: Default domain type: Translated
Jan 23 23:54:37.268454 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 23 23:54:37.268473 kernel: efivars: Registered efivars operations
Jan 23 23:54:37.268492 kernel: vgaarb: loaded
Jan 23 23:54:37.268511 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 23 23:54:37.268529 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 23:54:37.268548 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 23:54:37.268567 kernel: pnp: PnP ACPI init
Jan 23 23:54:37.268784 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 23 23:54:37.268819 kernel: pnp: PnP ACPI: found 1 devices
Jan 23 23:54:37.268838 kernel: NET: Registered PF_INET protocol family
Jan 23 23:54:37.268857 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 23:54:37.268876 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 23:54:37.268895 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 23:54:37.268914 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 23:54:37.274806 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 23:54:37.274845 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 23:54:37.274875 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:54:37.274895 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:54:37.274913 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 23:54:37.274995 kernel: PCI: CLS 0 bytes, default 64
Jan 23 23:54:37.275044 kernel: kvm [1]: HYP mode not available
Jan 23 23:54:37.275067 kernel: Initialise system trusted keyrings
Jan 23 23:54:37.275086 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 23:54:37.275105 kernel: Key type asymmetric registered
Jan 23 23:54:37.275501 kernel: Asymmetric key parser 'x509' registered
Jan 23 23:54:37.275530 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 23:54:37.275549 kernel: io scheduler mq-deadline registered
Jan 23 23:54:37.275568 kernel: io scheduler kyber registered
Jan 23 23:54:37.275586 kernel: io scheduler bfq registered
Jan 23 23:54:37.275882 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 23 23:54:37.275915 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 23 23:54:37.275984 kernel: ACPI: button: Power Button [PWRB]
Jan 23 23:54:37.276007 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 23 23:54:37.276027 kernel: ACPI: button: Sleep Button [SLPB]
Jan 23 23:54:37.276053 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 23:54:37.276073 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 23 23:54:37.276301 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 23 23:54:37.276331 kernel: printk: console [ttyS0] disabled
Jan 23 23:54:37.276352 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 23 23:54:37.276372 kernel: printk: console [ttyS0] enabled
Jan 23 23:54:37.276391 kernel: printk: bootconsole [uart0] disabled
Jan 23 23:54:37.276410 kernel: thunder_xcv, ver 1.0
Jan 23 23:54:37.276429 kernel: thunder_bgx, ver 1.0
Jan 23 23:54:37.276454 kernel: nicpf, ver 1.0
Jan 23 23:54:37.276474 kernel: nicvf, ver 1.0
Jan 23 23:54:37.276704 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 23 23:54:37.276904 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:54:36 UTC (1769212476)
Jan 23 23:54:37.276951 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 23:54:37.278984 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 23 23:54:37.279009 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 23 23:54:37.279028 kernel: watchdog: Hard watchdog permanently disabled
Jan 23 23:54:37.279057 kernel: NET: Registered PF_INET6 protocol family
Jan 23 23:54:37.279076 kernel: Segment Routing with IPv6
Jan 23 23:54:37.279095 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 23:54:37.279113 kernel: NET: Registered PF_PACKET protocol family
Jan 23 23:54:37.279131 kernel: Key type dns_resolver registered
Jan 23 23:54:37.279150 kernel: registered taskstats version 1
Jan 23 23:54:37.279168 kernel: Loading compiled-in X.509 certificates
Jan 23 23:54:37.279187 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: e1080b1efd8e2d5332b6814128fba42796535445'
Jan 23 23:54:37.279205 kernel: Key type .fscrypt registered
Jan 23 23:54:37.279229 kernel: Key type fscrypt-provisioning registered
Jan 23 23:54:37.279247 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 23:54:37.279265 kernel: ima: Allocated hash algorithm: sha1
Jan 23 23:54:37.279283 kernel: ima: No architecture policies found
Jan 23 23:54:37.279302 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 23 23:54:37.279320 kernel: clk: Disabling unused clocks
Jan 23 23:54:37.279339 kernel: Freeing unused kernel memory: 39424K
Jan 23 23:54:37.279357 kernel: Run /init as init process
Jan 23 23:54:37.279375 kernel: with arguments:
Jan 23 23:54:37.279398 kernel: /init
Jan 23 23:54:37.279417 kernel: with environment:
Jan 23 23:54:37.279435 kernel: HOME=/
Jan 23 23:54:37.279453 kernel: TERM=linux
Jan 23 23:54:37.279477 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 23 23:54:37.279500 systemd[1]: Detected virtualization amazon.
Jan 23 23:54:37.279521 systemd[1]: Detected architecture arm64.
Jan 23 23:54:37.279541 systemd[1]: Running in initrd.
Jan 23 23:54:37.279565 systemd[1]: No hostname configured, using default hostname.
Jan 23 23:54:37.279584 systemd[1]: Hostname set to .
Jan 23 23:54:37.279605 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 23:54:37.279626 systemd[1]: Queued start job for default target initrd.target.
Jan 23 23:54:37.279646 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:54:37.279666 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:54:37.279688 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 23:54:37.279709 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 23:54:37.279734 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 23:54:37.279756 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 23:54:37.279796 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 23:54:37.279823 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 23:54:37.279844 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:54:37.279864 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:54:37.279890 systemd[1]: Reached target paths.target - Path Units.
Jan 23 23:54:37.279911 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 23:54:37.280982 systemd[1]: Reached target swap.target - Swaps.
Jan 23 23:54:37.281020 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 23:54:37.281041 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 23:54:37.281064 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 23:54:37.281085 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 23:54:37.281105 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 23 23:54:37.281126 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:54:37.281155 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:54:37.281176 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:54:37.281196 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 23:54:37.281216 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 23:54:37.281239 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 23:54:37.281259 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 23:54:37.281279 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 23:54:37.281299 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 23:54:37.281320 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 23:54:37.281345 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:54:37.281365 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 23:54:37.281430 systemd-journald[251]: Collecting audit messages is disabled.
Jan 23 23:54:37.281475 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:54:37.281503 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 23:54:37.281525 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 23:54:37.281546 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 23:54:37.281566 kernel: Bridge firewalling registered
Jan 23 23:54:37.281591 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:54:37.281611 systemd-journald[251]: Journal started
Jan 23 23:54:37.281650 systemd-journald[251]: Runtime Journal (/run/log/journal/ec27fbdc091f3db497a043f67874632a) is 8.0M, max 75.3M, 67.3M free.
Jan 23 23:54:37.232118 systemd-modules-load[252]: Inserted module 'overlay'
Jan 23 23:54:37.265033 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jan 23 23:54:37.302009 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 23:54:37.307880 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 23:54:37.307896 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:54:37.313809 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 23:54:37.329973 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:54:37.346379 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:54:37.355229 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 23:54:37.367154 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 23:54:37.392691 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:54:37.406546 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:54:37.421350 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 23:54:37.425174 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:54:37.437271 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 23:54:37.461276 dracut-cmdline[289]: dracut-dracut-053
Jan 23 23:54:37.467795 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:54:37.520261 systemd-resolved[291]: Positive Trust Anchors:
Jan 23 23:54:37.520296 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 23:54:37.520358 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 23:54:37.623955 kernel: SCSI subsystem initialized
Jan 23 23:54:37.630972 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 23:54:37.642966 kernel: iscsi: registered transport (tcp)
Jan 23 23:54:37.665966 kernel: iscsi: registered transport (qla4xxx)
Jan 23 23:54:37.666039 kernel: QLogic iSCSI HBA Driver
Jan 23 23:54:37.752226 kernel: random: crng init done
Jan 23 23:54:37.752328 systemd-resolved[291]: Defaulting to hostname 'linux'.
Jan 23 23:54:37.758400 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 23:54:37.764346 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:54:37.792434 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 23:54:37.803369 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 23:54:37.840221 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 23:54:37.840298 kernel: device-mapper: uevent: version 1.0.3
Jan 23 23:54:37.840325 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 23 23:54:37.907978 kernel: raid6: neonx8 gen() 6707 MB/s
Jan 23 23:54:37.924965 kernel: raid6: neonx4 gen() 6562 MB/s
Jan 23 23:54:37.941967 kernel: raid6: neonx2 gen() 5462 MB/s
Jan 23 23:54:37.958966 kernel: raid6: neonx1 gen() 3963 MB/s
Jan 23 23:54:37.975966 kernel: raid6: int64x8 gen() 3824 MB/s
Jan 23 23:54:37.992966 kernel: raid6: int64x4 gen() 3725 MB/s
Jan 23 23:54:38.009966 kernel: raid6: int64x2 gen() 3600 MB/s
Jan 23 23:54:38.028016 kernel: raid6: int64x1 gen() 2756 MB/s
Jan 23 23:54:38.028069 kernel: raid6: using algorithm neonx8 gen() 6707 MB/s
Jan 23 23:54:38.047003 kernel: raid6: .... xor() 4822 MB/s, rmw enabled
Jan 23 23:54:38.047048 kernel: raid6: using neon recovery algorithm
Jan 23 23:54:38.054969 kernel: xor: measuring software checksum speed
Jan 23 23:54:38.057281 kernel: 8regs : 10276 MB/sec
Jan 23 23:54:38.057314 kernel: 32regs : 11914 MB/sec
Jan 23 23:54:38.058582 kernel: arm64_neon : 9571 MB/sec
Jan 23 23:54:38.058614 kernel: xor: using function: 32regs (11914 MB/sec)
Jan 23 23:54:38.143984 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 23:54:38.162829 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 23:54:38.176295 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:54:38.214161 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Jan 23 23:54:38.222961 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:54:38.242621 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 23:54:38.282754 dracut-pre-trigger[485]: rd.md=0: removing MD RAID activation
Jan 23 23:54:38.342844 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 23:54:38.356279 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 23:54:38.479867 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:54:38.499235 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 23:54:38.552060 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 23:54:38.566252 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 23:54:38.577793 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:54:38.590449 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 23:54:38.612243 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 23:54:38.652830 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 23:54:38.683282 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 23 23:54:38.683353 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 23 23:54:38.687525 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 23 23:54:38.687888 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 23 23:54:38.691531 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 23:54:38.710664 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:b4:57:6b:93:2f
Jan 23 23:54:38.695631 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:54:38.696844 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:54:38.696896 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:54:38.697224 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:54:38.699274 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:54:38.728482 (udev-worker)[535]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 23:54:38.746400 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:54:38.754893 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 23 23:54:38.754977 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 23 23:54:38.763966 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 23 23:54:38.775149 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 23:54:38.775215 kernel: GPT:9289727 != 33554431
Jan 23 23:54:38.775240 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 23:54:38.777880 kernel: GPT:9289727 != 33554431
Jan 23 23:54:38.777972 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 23:54:38.778971 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:54:38.786485 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:54:38.798294 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:54:38.838515 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:54:38.894030 kernel: BTRFS: device fsid 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (535)
Jan 23 23:54:38.921718 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (519)
Jan 23 23:54:38.968657 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 23 23:54:39.011543 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 23 23:54:39.046232 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 23 23:54:39.053176 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 23 23:54:39.069195 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 23 23:54:39.083579 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 23:54:39.096244 disk-uuid[665]: Primary Header is updated.
Jan 23 23:54:39.096244 disk-uuid[665]: Secondary Entries is updated.
Jan 23 23:54:39.096244 disk-uuid[665]: Secondary Header is updated.
Jan 23 23:54:39.108977 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:54:39.117964 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:54:39.121037 kernel: block device autoloading is deprecated and will be removed.
Jan 23 23:54:40.134016 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:54:40.136195 disk-uuid[666]: The operation has completed successfully.
Jan 23 23:54:40.318134 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 23:54:40.318338 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 23:54:40.382246 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 23:54:40.406824 sh[1012]: Success
Jan 23 23:54:40.432065 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 23 23:54:40.523284 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 23:54:40.549422 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 23:54:40.554390 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 23:54:40.604519 kernel: BTRFS info (device dm-0): first mount of filesystem 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe
Jan 23 23:54:40.604583 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:54:40.604610 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 23 23:54:40.608002 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 23:54:40.608039 kernel: BTRFS info (device dm-0): using free space tree
Jan 23 23:54:40.717968 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 23:54:40.727468 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 23:54:40.728046 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 23:54:40.745328 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 23:54:40.749663 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 23:54:40.786783 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:54:40.786859 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:54:40.786886 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:54:40.803992 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:54:40.827169 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:54:40.826652 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 23 23:54:40.839047 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 23:54:40.854104 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 23:54:40.950920 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 23:54:40.968316 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 23:54:41.031014 systemd-networkd[1216]: lo: Link UP
Jan 23 23:54:41.031481 systemd-networkd[1216]: lo: Gained carrier
Jan 23 23:54:41.034886 systemd-networkd[1216]: Enumeration completed
Jan 23 23:54:41.035077 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 23:54:41.036672 systemd-networkd[1216]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:54:41.036679 systemd-networkd[1216]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 23:54:41.042134 systemd[1]: Reached target network.target - Network.
Jan 23 23:54:41.060419 systemd-networkd[1216]: eth0: Link UP
Jan 23 23:54:41.060432 systemd-networkd[1216]: eth0: Gained carrier
Jan 23 23:54:41.060451 systemd-networkd[1216]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:54:41.084023 systemd-networkd[1216]: eth0: DHCPv4 address 172.31.18.35/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 23 23:54:41.300279 ignition[1139]: Ignition 2.19.0
Jan 23 23:54:41.302187 ignition[1139]: Stage: fetch-offline
Jan 23 23:54:41.303754 ignition[1139]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:41.303795 ignition[1139]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:41.308703 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 23:54:41.306346 ignition[1139]: Ignition finished successfully
Jan 23 23:54:41.329702 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 23:54:41.356254 ignition[1226]: Ignition 2.19.0
Jan 23 23:54:41.356285 ignition[1226]: Stage: fetch
Jan 23 23:54:41.358183 ignition[1226]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:41.358213 ignition[1226]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:41.358660 ignition[1226]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:41.377792 ignition[1226]: PUT result: OK
Jan 23 23:54:41.381040 ignition[1226]: parsed url from cmdline: ""
Jan 23 23:54:41.381063 ignition[1226]: no config URL provided
Jan 23 23:54:41.381078 ignition[1226]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 23:54:41.381104 ignition[1226]: no config at "/usr/lib/ignition/user.ign"
Jan 23 23:54:41.381138 ignition[1226]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:41.383201 ignition[1226]: PUT result: OK
Jan 23 23:54:41.383284 ignition[1226]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 23 23:54:41.386313 ignition[1226]: GET result: OK
Jan 23 23:54:41.401700 unknown[1226]: fetched base config from "system"
Jan 23 23:54:41.386562 ignition[1226]: parsing config with SHA512: 5453b745fe9b15a8f9fccf5b79006f503931ab0941d513f57c61e08adde37b5f8ac9b4258f3cc99240466c3c65bcba329b41c26fdbd3d66d05c5713e4bed16d8
Jan 23 23:54:41.401716 unknown[1226]: fetched base config from "system"
Jan 23 23:54:41.402816 ignition[1226]: fetch: fetch complete
Jan 23 23:54:41.401730 unknown[1226]: fetched user config from "aws"
Jan 23 23:54:41.402829 ignition[1226]: fetch: fetch passed
Jan 23 23:54:41.409442 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 23:54:41.402914 ignition[1226]: Ignition finished successfully
Jan 23 23:54:41.430219 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 23:54:41.458198 ignition[1232]: Ignition 2.19.0
Jan 23 23:54:41.458703 ignition[1232]: Stage: kargs
Jan 23 23:54:41.459447 ignition[1232]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:41.459472 ignition[1232]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:41.459661 ignition[1232]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:41.471025 ignition[1232]: PUT result: OK
Jan 23 23:54:41.476604 ignition[1232]: kargs: kargs passed
Jan 23 23:54:41.476741 ignition[1232]: Ignition finished successfully
Jan 23 23:54:41.480387 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 23:54:41.494863 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 23:54:41.521649 ignition[1239]: Ignition 2.19.0
Jan 23 23:54:41.521670 ignition[1239]: Stage: disks
Jan 23 23:54:41.522820 ignition[1239]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:41.522844 ignition[1239]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:41.523032 ignition[1239]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:41.535981 ignition[1239]: PUT result: OK
Jan 23 23:54:41.540700 ignition[1239]: disks: disks passed
Jan 23 23:54:41.541029 ignition[1239]: Ignition finished successfully
Jan 23 23:54:41.550053 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 23:54:41.550556 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 23:54:41.551880 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 23:54:41.553479 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 23:54:41.554293 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 23:54:41.555078 systemd[1]: Reached target basic.target - Basic System.
Jan 23 23:54:41.589321 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 23:54:41.634707 systemd-fsck[1248]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 23 23:54:41.641615 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 23:54:41.654144 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 23:54:41.733958 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 4f5f6971-6639-4171-835a-63d34aadb0e5 r/w with ordered data mode. Quota mode: none.
Jan 23 23:54:41.735366 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 23:54:41.740069 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 23:54:41.756178 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:54:41.767144 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 23:54:41.777325 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 23:54:41.789791 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1267)
Jan 23 23:54:41.777416 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 23:54:41.777467 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 23:54:41.804908 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:54:41.804983 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:54:41.805013 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:54:41.797719 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 23:54:41.816390 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 23:54:41.826982 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:54:41.829168 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:54:42.116326 initrd-setup-root[1291]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 23:54:42.139659 initrd-setup-root[1298]: cut: /sysroot/etc/group: No such file or directory
Jan 23 23:54:42.150521 initrd-setup-root[1305]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 23:54:42.162405 initrd-setup-root[1312]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 23:54:42.493627 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 23:54:42.503296 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 23:54:42.510231 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 23:54:42.533687 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 23:54:42.543047 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:54:42.569916 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 23:54:42.587541 ignition[1380]: INFO : Ignition 2.19.0
Jan 23 23:54:42.587541 ignition[1380]: INFO : Stage: mount
Jan 23 23:54:42.592486 ignition[1380]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:42.592486 ignition[1380]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:42.592486 ignition[1380]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:42.601877 ignition[1380]: INFO : PUT result: OK
Jan 23 23:54:42.602072 systemd-networkd[1216]: eth0: Gained IPv6LL
Jan 23 23:54:42.610661 ignition[1380]: INFO : mount: mount passed
Jan 23 23:54:42.610661 ignition[1380]: INFO : Ignition finished successfully
Jan 23 23:54:42.615404 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 23:54:42.626251 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 23:54:42.743396 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:54:42.775981 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1391)
Jan 23 23:54:42.780051 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:54:42.780102 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:54:42.780128 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:54:42.787989 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:54:42.789787 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:54:42.826087 ignition[1408]: INFO : Ignition 2.19.0
Jan 23 23:54:42.828499 ignition[1408]: INFO : Stage: files
Jan 23 23:54:42.831025 ignition[1408]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:42.833680 ignition[1408]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:42.836861 ignition[1408]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:42.840989 ignition[1408]: INFO : PUT result: OK
Jan 23 23:54:42.845734 ignition[1408]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 23:54:42.852730 ignition[1408]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 23:54:42.852730 ignition[1408]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 23:54:42.909065 ignition[1408]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 23:54:42.913100 ignition[1408]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 23:54:42.916472 ignition[1408]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 23:54:42.913496 unknown[1408]: wrote ssh authorized keys file for user: core
Jan 23 23:54:42.923225 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 23 23:54:42.923225 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 23 23:54:42.923225 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 23 23:54:42.923225 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 23 23:54:43.023492 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 23 23:54:43.170633 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 23 23:54:43.170633 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 23:54:43.170633 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 23:54:43.170633 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:54:43.170633 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:54:43.170633 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:54:43.170633 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:54:43.170633 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:54:43.170633 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:54:43.214032 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:54:43.214032 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:54:43.214032 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:54:43.214032 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:54:43.214032 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:54:43.214032 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jan 23 23:54:43.673345 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 23 23:54:44.359117 ignition[1408]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 23 23:54:44.364921 ignition[1408]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 23 23:54:44.364921 ignition[1408]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 23 23:54:44.364921 ignition[1408]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 23 23:54:44.364921 ignition[1408]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 23 23:54:44.364921 ignition[1408]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 23 23:54:44.364921 ignition[1408]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
ignition[1408]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 23:54:44.364921 ignition[1408]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 23 23:54:44.364921 ignition[1408]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 23 23:54:44.364921 ignition[1408]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 23 23:54:44.364921 ignition[1408]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 23 23:54:44.364921 ignition[1408]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 23 23:54:44.364921 ignition[1408]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 23 23:54:44.364921 ignition[1408]: INFO : files: files passed Jan 23 23:54:44.364921 ignition[1408]: INFO : Ignition finished successfully Jan 23 23:54:44.424769 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 23 23:54:44.436658 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 23 23:54:44.450293 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 23 23:54:44.459114 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 23 23:54:44.459511 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 23 23:54:44.490614 initrd-setup-root-after-ignition[1437]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:54:44.490614 initrd-setup-root-after-ignition[1437]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:54:44.501607 initrd-setup-root-after-ignition[1441]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 23 23:54:44.508761 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:54:44.512730 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 23 23:54:44.527252 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 23 23:54:44.605835 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 23 23:54:44.606093 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 23 23:54:44.612993 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 23 23:54:44.619726 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 23 23:54:44.622479 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 23 23:54:44.638967 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 23 23:54:44.671697 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 23 23:54:44.687461 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 23 23:54:44.715224 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:54:44.718500 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:54:44.721875 systemd[1]: Stopped target timers.target - Timer Units. 
Jan 23 23:54:44.727436 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 23:54:44.727705 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 23:54:44.745738 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 23:54:44.751355 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 23:54:44.754113 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 23:54:44.757365 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 23:54:44.768587 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 23:54:44.771628 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 23:54:44.775146 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 23:54:44.781473 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 23:54:44.784634 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 23:54:44.788377 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 23:54:44.792536 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 23:54:44.792794 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 23:54:44.799431 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:54:44.805368 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:54:44.811447 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 23:54:44.823633 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:54:44.836325 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 23:54:44.836591 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 23:54:44.840097 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 23:54:44.840370 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 23:54:44.843802 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 23:54:44.844105 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 23:54:44.868432 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 23:54:44.872118 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 23:54:44.872426 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:54:44.891266 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 23:54:44.896196 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 23:54:44.896508 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:54:44.902782 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 23:54:44.903289 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 23:54:44.935380 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 23:54:44.942255 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 23:54:44.954700 ignition[1461]: INFO : Ignition 2.19.0
Jan 23 23:54:44.954700 ignition[1461]: INFO : Stage: umount
Jan 23 23:54:44.954700 ignition[1461]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:44.954700 ignition[1461]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:44.954700 ignition[1461]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:44.980337 ignition[1461]: INFO : PUT result: OK
Jan 23 23:54:44.980337 ignition[1461]: INFO : umount: umount passed
Jan 23 23:54:44.980337 ignition[1461]: INFO : Ignition finished successfully
Jan 23 23:54:44.962805 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 23:54:44.973723 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 23:54:44.975054 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 23:54:44.993784 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 23:54:44.993991 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 23:54:44.997427 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 23:54:44.997589 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 23:54:45.006207 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 23:54:45.006327 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 23:54:45.013977 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 23 23:54:45.014074 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 23 23:54:45.031635 systemd[1]: Stopped target network.target - Network.
Jan 23 23:54:45.034051 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 23:54:45.034171 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 23:54:45.039742 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 23:54:45.042212 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 23:54:45.053588 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:54:45.056956 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 23:54:45.059254 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 23:54:45.062229 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 23:54:45.062326 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 23:54:45.065141 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 23:54:45.065223 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 23:54:45.072081 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 23:54:45.072197 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 23:54:45.077642 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 23:54:45.077746 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 23:54:45.081134 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 23:54:45.081234 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 23:54:45.093636 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 23:54:45.098759 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 23:54:45.107004 systemd-networkd[1216]: eth0: DHCPv6 lease lost
Jan 23 23:54:45.111528 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 23:54:45.111782 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 23:54:45.121399 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 23:54:45.121536 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:54:45.146241 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 23:54:45.151360 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 23:54:45.151641 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 23:54:45.165271 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:54:45.180271 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 23:54:45.184644 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 23:54:45.197585 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 23:54:45.197981 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:54:45.212221 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 23:54:45.212628 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 23:54:45.224803 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 23:54:45.224912 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:54:45.230589 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 23:54:45.230665 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:54:45.233557 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 23:54:45.233647 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 23:54:45.236721 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 23:54:45.236805 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 23:54:45.239884 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 23:54:45.239989 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:54:45.272278 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 23:54:45.275791 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 23:54:45.275913 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:54:45.283082 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 23:54:45.283200 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:54:45.291465 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 23:54:45.301255 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:54:45.307962 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 23:54:45.308092 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:54:45.314353 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:54:45.314483 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:54:45.340311 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 23:54:45.340744 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 23:54:45.349402 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 23:54:45.363304 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 23:54:45.405307 systemd[1]: Switching root.
Jan 23 23:54:45.450049 systemd-journald[251]: Journal stopped
Jan 23 23:54:48.368664 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Jan 23 23:54:48.368811 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 23:54:48.368856 kernel: SELinux: policy capability open_perms=1
Jan 23 23:54:48.368888 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 23:54:48.368917 kernel: SELinux: policy capability always_check_network=0
Jan 23 23:54:48.368982 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 23:54:48.369015 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 23:54:48.369075 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 23:54:48.369116 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 23:54:48.369148 kernel: audit: type=1403 audit(1769212486.364:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 23:54:48.369181 systemd[1]: Successfully loaded SELinux policy in 66.625ms.
Jan 23 23:54:48.369228 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.226ms.
Jan 23 23:54:48.369263 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 23 23:54:48.369297 systemd[1]: Detected virtualization amazon.
Jan 23 23:54:48.369329 systemd[1]: Detected architecture arm64.
Jan 23 23:54:48.369361 systemd[1]: Detected first boot.
Jan 23 23:54:48.369393 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 23:54:48.369431 zram_generator::config[1520]: No configuration found.
Jan 23 23:54:48.369466 systemd[1]: Populated /etc with preset unit settings.
Jan 23 23:54:48.369498 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 23:54:48.369531 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 23 23:54:48.369563 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 23:54:48.369595 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 23:54:48.369629 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 23:54:48.369661 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 23:54:48.369697 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 23:54:48.369728 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 23:54:48.369761 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 23:54:48.369791 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 23:54:48.369822 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:54:48.369854 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:54:48.369914 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 23:54:48.369970 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 23:54:48.370007 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 23:54:48.370047 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 23:54:48.370079 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 23:54:48.370110 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:54:48.370140 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 23:54:48.370171 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:54:48.370214 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 23:54:48.370244 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 23:54:48.370277 systemd[1]: Reached target swap.target - Swaps.
Jan 23 23:54:48.370312 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 23:54:48.370344 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 23:54:48.370375 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 23:54:48.370405 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 23 23:54:48.370437 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:54:48.370468 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:54:48.370501 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:54:48.370531 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 23:54:48.370562 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 23:54:48.370598 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 23:54:48.370628 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 23:54:48.370660 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 23:54:48.370718 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 23:54:48.370752 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 23:54:48.370785 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 23:54:48.370815 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 23:54:48.370846 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 23:54:48.370879 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 23:54:48.370914 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 23:54:48.370967 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 23:54:48.371004 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 23:54:48.371036 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 23:54:48.371066 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 23:54:48.371101 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 23:54:48.371133 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 23 23:54:48.371166 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 23 23:54:48.371200 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 23:54:48.371230 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 23:54:48.371260 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 23:54:48.371294 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 23:54:48.371323 kernel: loop: module loaded
Jan 23 23:54:48.371353 kernel: fuse: init (API version 7.39)
Jan 23 23:54:48.371385 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 23:54:48.371418 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 23:54:48.371447 kernel: ACPI: bus type drm_connector registered
Jan 23 23:54:48.371475 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 23:54:48.371536 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 23:54:48.371569 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 23:54:48.371598 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 23:54:48.371635 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 23:54:48.371665 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 23:54:48.371695 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:54:48.371725 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 23:54:48.371829 systemd-journald[1627]: Collecting audit messages is disabled.
Jan 23 23:54:48.371902 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 23:54:48.371957 systemd-journald[1627]: Journal started
Jan 23 23:54:48.372010 systemd-journald[1627]: Runtime Journal (/run/log/journal/ec27fbdc091f3db497a043f67874632a) is 8.0M, max 75.3M, 67.3M free.
Jan 23 23:54:48.377971 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 23:54:48.384328 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 23:54:48.385140 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 23:54:48.389885 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 23:54:48.390257 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 23:54:48.393591 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 23:54:48.393916 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 23:54:48.397849 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 23:54:48.398203 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 23:54:48.401573 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 23:54:48.402115 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 23:54:48.405737 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:54:48.409447 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 23:54:48.414767 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 23:54:48.442588 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 23:54:48.454232 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 23:54:48.468163 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 23:54:48.474139 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 23:54:48.483219 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 23:54:48.504481 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 23:54:48.507587 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 23:54:48.515269 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 23:54:48.518750 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 23:54:48.523129 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 23:54:48.539983 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 23:54:48.561819 systemd-journald[1627]: Time spent on flushing to /var/log/journal/ec27fbdc091f3db497a043f67874632a is 86.688ms for 885 entries.
Jan 23 23:54:48.561819 systemd-journald[1627]: System Journal (/var/log/journal/ec27fbdc091f3db497a043f67874632a) is 8.0M, max 195.6M, 187.6M free.
Jan 23 23:54:48.671373 systemd-journald[1627]: Received client request to flush runtime journal.
Jan 23 23:54:48.557337 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 23:54:48.561262 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 23:54:48.594102 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 23:54:48.597879 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 23:54:48.686680 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 23:54:48.718609 systemd-tmpfiles[1672]: ACLs are not supported, ignoring.
Jan 23 23:54:48.721671 systemd-tmpfiles[1672]: ACLs are not supported, ignoring.
Jan 23 23:54:48.722916 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:54:48.729462 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:54:48.746721 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 23:54:48.760260 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 23:54:48.779297 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 23 23:54:48.824418 udevadm[1692]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 23 23:54:48.863821 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 23:54:48.881274 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 23:54:48.927979 systemd-tmpfiles[1696]: ACLs are not supported, ignoring.
Jan 23 23:54:48.928672 systemd-tmpfiles[1696]: ACLs are not supported, ignoring.
Jan 23 23:54:48.939797 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:54:49.561916 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 23:54:49.576292 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:54:49.643876 systemd-udevd[1702]: Using default interface naming scheme 'v255'.
Jan 23 23:54:49.684243 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:54:49.725449 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 23:54:49.782325 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 23:54:49.839989 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 23 23:54:49.852659 (udev-worker)[1705]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 23:54:49.961431 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 23:54:50.141901 systemd-networkd[1714]: lo: Link UP
Jan 23 23:54:50.142657 systemd-networkd[1714]: lo: Gained carrier
Jan 23 23:54:50.145918 systemd-networkd[1714]: Enumeration completed
Jan 23 23:54:50.147109 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 23:54:50.158661 systemd-networkd[1714]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:54:50.164018 systemd-networkd[1714]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 23:54:50.168816 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 23:54:50.174850 systemd-networkd[1714]: eth0: Link UP
Jan 23 23:54:50.176513 systemd-networkd[1714]: eth0: Gained carrier
Jan 23 23:54:50.177256 systemd-networkd[1714]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:54:50.192003 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1725)
Jan 23 23:54:50.192183 systemd-networkd[1714]: eth0: DHCPv4 address 172.31.18.35/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 23 23:54:50.213494 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:54:50.434110 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 23 23:54:50.439850 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:54:50.500617 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 23 23:54:50.521233 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 23 23:54:50.562986 lvm[1831]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 23 23:54:50.605177 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 23 23:54:50.609741 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:54:50.621818 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 23 23:54:50.639094 lvm[1834]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 23 23:54:50.680265 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 23 23:54:50.685151 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 23:54:50.689050 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 23:54:50.689115 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 23:54:50.691886 systemd[1]: Reached target machines.target - Containers.
Jan 23 23:54:50.696909 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 23 23:54:50.709536 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 23 23:54:50.718283 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 23:54:50.723594 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 23:54:50.736429 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 23:54:50.745322 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 23 23:54:50.764008 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 23:54:50.770739 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 23 23:54:50.807749 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 23:54:50.810658 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 23 23:54:50.825152 kernel: loop0: detected capacity change from 0 to 114328
Jan 23 23:54:50.829331 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 23:54:50.965732 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 23:54:50.993961 kernel: loop1: detected capacity change from 0 to 52536
Jan 23 23:54:51.114985 kernel: loop2: detected capacity change from 0 to 114432
Jan 23 23:54:51.231010 kernel: loop3: detected capacity change from 0 to 207008
Jan 23 23:54:51.348969 kernel: loop4: detected capacity change from 0 to 114328
Jan 23 23:54:51.369979 kernel: loop5: detected capacity change from 0 to 52536
Jan 23 23:54:51.383981 kernel: loop6: detected capacity change from 0 to 114432
Jan 23 23:54:51.395981 kernel: loop7: detected capacity change from 0 to 207008
Jan 23 23:54:51.418732 (sd-merge)[1856]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 23 23:54:51.419806 (sd-merge)[1856]: Merged extensions into '/usr'.
Jan 23 23:54:51.428794 systemd[1]: Reloading requested from client PID 1842 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 23:54:51.428827 systemd[1]: Reloading...
Jan 23 23:54:51.575970 zram_generator::config[1885]: No configuration found.
Jan 23 23:54:51.878475 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 23 23:54:52.047514 systemd[1]: Reloading finished in 617 ms.
Jan 23 23:54:52.072016 ldconfig[1838]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 23:54:52.072762 systemd-networkd[1714]: eth0: Gained IPv6LL
Jan 23 23:54:52.082588 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 23 23:54:52.089882 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 23:54:52.095086 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 23:54:52.119484 systemd[1]: Starting ensure-sysext.service...
Jan 23 23:54:52.128417 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 23:54:52.141304 systemd[1]: Reloading requested from client PID 1945 ('systemctl') (unit ensure-sysext.service)...
Jan 23 23:54:52.141333 systemd[1]: Reloading...
Jan 23 23:54:52.193565 systemd-tmpfiles[1946]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 23:54:52.194336 systemd-tmpfiles[1946]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 23:54:52.198267 systemd-tmpfiles[1946]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 23:54:52.198897 systemd-tmpfiles[1946]: ACLs are not supported, ignoring.
Jan 23 23:54:52.199122 systemd-tmpfiles[1946]: ACLs are not supported, ignoring.
Jan 23 23:54:52.209812 systemd-tmpfiles[1946]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 23:54:52.209847 systemd-tmpfiles[1946]: Skipping /boot
Jan 23 23:54:52.244866 systemd-tmpfiles[1946]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 23:54:52.244903 systemd-tmpfiles[1946]: Skipping /boot
Jan 23 23:54:52.340982 zram_generator::config[1976]: No configuration found.
Jan 23 23:54:52.587289 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 23 23:54:52.750051 systemd[1]: Reloading finished in 607 ms.
Jan 23 23:54:52.781014 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:54:52.808447 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 23 23:54:52.822146 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 23:54:52.839505 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 23:54:52.857354 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 23:54:52.872833 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 23:54:52.900715 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 23:54:52.913835 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 23:54:52.933646 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 23:54:52.961510 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 23:54:52.969020 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 23:54:52.979318 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 23:54:52.986797 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 23:54:52.991973 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 23:54:53.011853 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 23:54:53.013468 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 23:54:53.027878 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 23:54:53.029378 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 23:54:53.057553 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 23:54:53.068410 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 23:54:53.079458 augenrules[2067]: No rules
Jan 23 23:54:53.090375 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 23:54:53.111442 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 23:54:53.117317 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 23:54:53.139777 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 23:54:53.150154 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 23 23:54:53.159868 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 23:54:53.172341 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 23:54:53.172727 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 23:54:53.178758 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 23:54:53.179194 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 23:54:53.186547 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 23:54:53.187062 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 23:54:53.212215 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 23:54:53.237521 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 23:54:53.255829 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 23:54:53.266375 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 23:54:53.274421 systemd-resolved[2041]: Positive Trust Anchors:
Jan 23 23:54:53.274466 systemd-resolved[2041]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 23:54:53.274531 systemd-resolved[2041]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 23:54:53.284303 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 23:54:53.295278 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 23:54:53.301815 systemd-resolved[2041]: Defaulting to hostname 'linux'.
Jan 23 23:54:53.313306 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 23:54:53.318977 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 23:54:53.319112 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 23:54:53.324552 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 23:54:53.326531 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 23:54:53.340712 systemd[1]: Finished ensure-sysext.service.
Jan 23 23:54:53.346279 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 23:54:53.346651 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 23:54:53.352598 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 23:54:53.353031 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 23:54:53.357532 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 23:54:53.357924 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 23:54:53.361708 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 23:54:53.362395 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 23:54:53.372614 systemd[1]: Reached target network.target - Network.
Jan 23 23:54:53.377373 systemd[1]: Reached target network-online.target - Network is Online.
Jan 23 23:54:53.381056 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:54:53.384991 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 23:54:53.385218 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 23:54:53.389239 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 23 23:54:53.392714 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 23 23:54:53.396491 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 23 23:54:53.399652 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 23 23:54:53.403100 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 23 23:54:53.406463 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 23 23:54:53.406516 systemd[1]: Reached target paths.target - Path Units.
Jan 23 23:54:53.409392 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 23:54:53.412914 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 23 23:54:53.418381 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 23 23:54:53.424476 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 23 23:54:53.427700 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 23:54:53.430925 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 23 23:54:53.436114 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 23:54:53.440162 systemd[1]: Reached target basic.target - Basic System.
Jan 23 23:54:53.443236 systemd[1]: System is tainted: cgroupsv1
Jan 23 23:54:53.443337 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 23 23:54:53.443392 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 23 23:54:53.453259 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 23 23:54:53.462259 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 23 23:54:53.479338 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 23 23:54:53.487152 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 23 23:54:53.500288 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 23 23:54:53.507122 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 23 23:54:53.528193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 23:54:53.538113 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 23 23:54:53.557668 jq[2111]: false
Jan 23 23:54:53.567168 systemd[1]: Started ntpd.service - Network Time Service.
Jan 23 23:54:53.597734 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 23 23:54:53.623116 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 23 23:54:53.642025 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 23 23:54:53.658262 extend-filesystems[2112]: Found loop4
Jan 23 23:54:53.675980 extend-filesystems[2112]: Found loop5
Jan 23 23:54:53.675980 extend-filesystems[2112]: Found loop6
Jan 23 23:54:53.675980 extend-filesystems[2112]: Found loop7
Jan 23 23:54:53.675980 extend-filesystems[2112]: Found nvme0n1
Jan 23 23:54:53.675980 extend-filesystems[2112]: Found nvme0n1p1
Jan 23 23:54:53.675980 extend-filesystems[2112]: Found nvme0n1p2
Jan 23 23:54:53.675980 extend-filesystems[2112]: Found nvme0n1p3
Jan 23 23:54:53.675980 extend-filesystems[2112]: Found usr
Jan 23 23:54:53.675980 extend-filesystems[2112]: Found nvme0n1p4
Jan 23 23:54:53.675980 extend-filesystems[2112]: Found nvme0n1p6
Jan 23 23:54:53.675980 extend-filesystems[2112]: Found nvme0n1p7
Jan 23 23:54:53.675980 extend-filesystems[2112]: Found nvme0n1p9
Jan 23 23:54:53.675980 extend-filesystems[2112]: Checking size of /dev/nvme0n1p9
Jan 23 23:54:53.664083 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 23 23:54:53.718011 dbus-daemon[2110]: [system] SELinux support is enabled
Jan 23 23:54:53.722262 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 23 23:54:53.754922 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 23 23:54:53.763603 dbus-daemon[2110]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1714 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 23 23:54:53.765688 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 23 23:54:53.779419 systemd[1]: Starting update-engine.service - Update Engine...
Jan 23 23:54:53.825318 extend-filesystems[2112]: Resized partition /dev/nvme0n1p9
Jan 23 23:54:53.828192 coreos-metadata[2108]: Jan 23 23:54:53.822 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Jan 23 23:54:53.828192 coreos-metadata[2108]: Jan 23 23:54:53.828 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1
Jan 23 23:54:53.857572 coreos-metadata[2108]: Jan 23 23:54:53.833 INFO Fetch successful
Jan 23 23:54:53.857572 coreos-metadata[2108]: Jan 23 23:54:53.833 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1
Jan 23 23:54:53.857572 coreos-metadata[2108]: Jan 23 23:54:53.834 INFO Fetch successful
Jan 23 23:54:53.857572 coreos-metadata[2108]: Jan 23 23:54:53.834 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1
Jan 23 23:54:53.857572 coreos-metadata[2108]: Jan 23 23:54:53.835 INFO Fetch successful
Jan 23 23:54:53.857572 coreos-metadata[2108]: Jan 23 23:54:53.835 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1
Jan 23 23:54:53.857572 coreos-metadata[2108]: Jan 23 23:54:53.836 INFO Fetch successful
Jan 23 23:54:53.857572 coreos-metadata[2108]: Jan 23 23:54:53.836 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1
Jan 23 23:54:53.857572 coreos-metadata[2108]: Jan 23 23:54:53.838 INFO Fetch failed with 404: resource not found
Jan 23 23:54:53.857572 coreos-metadata[2108]: Jan 23 23:54:53.838 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1
Jan 23 23:54:53.857572 coreos-metadata[2108]: Jan 23 23:54:53.839 INFO Fetch successful
Jan 23 23:54:53.857572 coreos-metadata[2108]: Jan 23 23:54:53.839 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1
Jan 23 23:54:53.857572 coreos-metadata[2108]: Jan 23 23:54:53.840 INFO Fetch successful
Jan 23 23:54:53.857572 coreos-metadata[2108]: Jan 23 23:54:53.840 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1
Jan 23 23:54:53.857572 coreos-metadata[2108]: Jan 23 23:54:53.843 INFO Fetch successful
Jan 23 23:54:53.857572 coreos-metadata[2108]: Jan 23 23:54:53.844 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1
Jan 23 23:54:53.857572 coreos-metadata[2108]: Jan 23 23:54:53.844 INFO Fetch successful
Jan 23 23:54:53.857572 coreos-metadata[2108]: Jan 23 23:54:53.844 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1
Jan 23 23:54:53.857572 coreos-metadata[2108]: Jan 23 23:54:53.844 INFO Fetch successful
Jan 23 23:54:53.857484 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 23 23:54:53.863325 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 23 23:54:53.885972 extend-filesystems[2147]: resize2fs 1.47.1 (20-May-2024)
Jan 23 23:54:53.882872 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 23 23:54:53.911349 ntpd[2118]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 21:53:23 UTC 2026 (1): Starting
Jan 23 23:54:53.883411 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 23 23:54:53.926335 ntpd[2118]: 23 Jan 23:54:53 ntpd[2118]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 21:53:23 UTC 2026 (1): Starting
Jan 23 23:54:53.926335 ntpd[2118]: 23 Jan 23:54:53 ntpd[2118]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 23 23:54:53.926335 ntpd[2118]: 23 Jan 23:54:53 ntpd[2118]: ----------------------------------------------------
Jan 23 23:54:53.926335 ntpd[2118]: 23 Jan 23:54:53 ntpd[2118]: ntp-4 is maintained by Network Time Foundation,
Jan 23 23:54:53.926335 ntpd[2118]: 23 Jan 23:54:53 ntpd[2118]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 23 23:54:53.926335 ntpd[2118]: 23 Jan 23:54:53 ntpd[2118]: corporation. Support and training for ntp-4 are
Jan 23 23:54:53.926335 ntpd[2118]: 23 Jan 23:54:53 ntpd[2118]: available at https://www.nwtime.org/support
Jan 23 23:54:53.926335 ntpd[2118]: 23 Jan 23:54:53 ntpd[2118]: ----------------------------------------------------
Jan 23 23:54:53.926335 ntpd[2118]: 23 Jan 23:54:53 ntpd[2118]: proto: precision = 0.108 usec (-23)
Jan 23 23:54:53.911403 ntpd[2118]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 23 23:54:53.915769 systemd[1]: motdgen.service: Deactivated successfully.
Jan 23 23:54:53.960927 ntpd[2118]: 23 Jan 23:54:53 ntpd[2118]: basedate set to 2026-01-11
Jan 23 23:54:53.960927 ntpd[2118]: 23 Jan 23:54:53 ntpd[2118]: gps base set to 2026-01-11 (week 2401)
Jan 23 23:54:53.960927 ntpd[2118]: 23 Jan 23:54:53 ntpd[2118]: Listen and drop on 0 v6wildcard [::]:123
Jan 23 23:54:53.960927 ntpd[2118]: 23 Jan 23:54:53 ntpd[2118]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 23 23:54:53.960927 ntpd[2118]: 23 Jan 23:54:53 ntpd[2118]: Listen normally on 2 lo 127.0.0.1:123
Jan 23 23:54:53.960927 ntpd[2118]: 23 Jan 23:54:53 ntpd[2118]: Listen normally on 3 eth0 172.31.18.35:123
Jan 23 23:54:53.960927 ntpd[2118]: 23 Jan 23:54:53 ntpd[2118]: Listen normally on 4 lo [::1]:123
Jan 23 23:54:53.960927 ntpd[2118]: 23 Jan 23:54:53 ntpd[2118]: Listen normally on 5 eth0 [fe80::4b4:57ff:fe6b:932f%2]:123
Jan 23 23:54:53.960927 ntpd[2118]: 23 Jan 23:54:53 ntpd[2118]: Listening on routing socket on fd #22 for interface updates
Jan 23 23:54:53.911425 ntpd[2118]: ----------------------------------------------------
Jan 23 23:54:53.916376 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 23 23:54:53.911445 ntpd[2118]: ntp-4 is maintained by Network Time Foundation,
Jan 23 23:54:53.941023 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 23 23:54:53.979118 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Jan 23 23:54:53.911464 ntpd[2118]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 23 23:54:53.979310 ntpd[2118]: 23 Jan 23:54:53 ntpd[2118]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 23 23:54:53.979310 ntpd[2118]: 23 Jan 23:54:53 ntpd[2118]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 23 23:54:53.946850 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 23 23:54:53.911483 ntpd[2118]: corporation. Support and training for ntp-4 are
Jan 23 23:54:53.947507 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 23 23:54:53.911502 ntpd[2118]: available at https://www.nwtime.org/support
Jan 23 23:54:53.911524 ntpd[2118]: ----------------------------------------------------
Jan 23 23:54:53.921806 ntpd[2118]: proto: precision = 0.108 usec (-23)
Jan 23 23:54:53.928304 ntpd[2118]: basedate set to 2026-01-11
Jan 23 23:54:53.928340 ntpd[2118]: gps base set to 2026-01-11 (week 2401)
Jan 23 23:54:53.935024 ntpd[2118]: Listen and drop on 0 v6wildcard [::]:123
Jan 23 23:54:53.935110 ntpd[2118]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 23 23:54:53.935389 ntpd[2118]: Listen normally on 2 lo 127.0.0.1:123
Jan 23 23:54:53.935455 ntpd[2118]: Listen normally on 3 eth0 172.31.18.35:123
Jan 23 23:54:53.935523 ntpd[2118]: Listen normally on 4 lo [::1]:123
Jan 23 23:54:53.935601 ntpd[2118]: Listen normally on 5 eth0 [fe80::4b4:57ff:fe6b:932f%2]:123
Jan 23 23:54:53.935672 ntpd[2118]: Listening on routing socket on fd #22 for interface updates
Jan 23 23:54:53.966756 ntpd[2118]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 23 23:54:53.999536 jq[2149]: true
Jan 23 23:54:53.966825 ntpd[2118]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 23 23:54:54.064467 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 23 23:54:54.064581 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 23 23:54:54.070615 dbus-daemon[2110]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 23 23:54:54.071260 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 23 23:54:54.071330 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 23 23:54:54.112593 (ntainerd)[2180]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 23 23:54:54.125379 jq[2169]: true
Jan 23 23:54:54.119215 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 23 23:54:54.185795 update_engine[2137]: I20260123 23:54:54.184859 2137 main.cc:92] Flatcar Update Engine starting
Jan 23 23:54:54.195422 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067
Jan 23 23:54:54.195580 tar[2159]: linux-arm64/LICENSE
Jan 23 23:54:54.196711 systemd[1]: Started update-engine.service - Update Engine.
Jan 23 23:54:54.246511 update_engine[2137]: I20260123 23:54:54.211237 2137 update_check_scheduler.cc:74] Next update check in 9m18s
Jan 23 23:54:54.214864 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 23 23:54:54.222056 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 23 23:54:54.235780 systemd[1]: Finished setup-oem.service - Setup OEM.
Jan 23 23:54:54.285178 tar[2159]: linux-arm64/helm
Jan 23 23:54:54.285307 extend-filesystems[2147]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Jan 23 23:54:54.285307 extend-filesystems[2147]: old_desc_blocks = 1, new_desc_blocks = 2
Jan 23 23:54:54.285307 extend-filesystems[2147]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long.
Jan 23 23:54:54.303448 extend-filesystems[2112]: Resized filesystem in /dev/nvme0n1p9 Jan 23 23:54:54.366964 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (2201) Jan 23 23:54:54.432603 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 23 23:54:54.438387 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 23:54:54.438893 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 23:54:54.447188 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 23:54:54.455955 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 23:54:54.481053 systemd-logind[2134]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 23:54:54.481150 systemd-logind[2134]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 23 23:54:54.485232 systemd-logind[2134]: New seat seat0. Jan 23 23:54:54.486752 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 23:54:54.506069 bash[2246]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:54:54.509790 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 23:54:54.546519 systemd[1]: Starting sshkeys.service... Jan 23 23:54:54.644950 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 23:54:54.713497 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 23:54:54.885581 amazon-ssm-agent[2240]: Initializing new seelog logger Jan 23 23:54:54.885581 amazon-ssm-agent[2240]: New Seelog Logger Creation Complete Jan 23 23:54:54.885581 amazon-ssm-agent[2240]: 2026/01/23 23:54:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:54.885581 amazon-ssm-agent[2240]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:54.885581 amazon-ssm-agent[2240]: 2026/01/23 23:54:54 processing appconfig overrides Jan 23 23:54:54.893244 amazon-ssm-agent[2240]: 2026/01/23 23:54:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:54.893244 amazon-ssm-agent[2240]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:54.893244 amazon-ssm-agent[2240]: 2026/01/23 23:54:54 processing appconfig overrides Jan 23 23:54:54.893244 amazon-ssm-agent[2240]: 2026/01/23 23:54:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:54.893244 amazon-ssm-agent[2240]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:54.893244 amazon-ssm-agent[2240]: 2026/01/23 23:54:54 processing appconfig overrides Jan 23 23:54:54.893244 amazon-ssm-agent[2240]: 2026-01-23 23:54:54 INFO Proxy environment variables: Jan 23 23:54:54.898915 amazon-ssm-agent[2240]: 2026/01/23 23:54:54 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:54.898915 amazon-ssm-agent[2240]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
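[Editor's note] The coreos-metadata fetches that follow ("Putting http://169.254.169.254/latest/api/token", then "Fetching .../meta-data/public-keys") are the standard EC2 IMDSv2 handshake: a PUT obtains a short-lived session token, which then authorizes metadata GETs. A minimal stand-alone sketch of the same handshake, only runnable on an EC2 instance; the 21600-second TTL is an arbitrary illustrative choice, not taken from the log:

```python
# Sketch of the EC2 IMDSv2 handshake that coreos-metadata performs below:
# PUT a session token, then GET metadata with that token attached.
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl: int = 21600) -> str:
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path: str, token: str) -> str:
    req = urllib.request.Request(
        f"{IMDS}/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

token = imds_token()
print(imds_get("2021-01-03/meta-data/public-keys", token))
```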
Jan 23 23:54:54.898915 amazon-ssm-agent[2240]: 2026/01/23 23:54:54 processing appconfig overrides Jan 23 23:54:54.992034 amazon-ssm-agent[2240]: 2026-01-23 23:54:54 INFO https_proxy: Jan 23 23:54:55.093049 amazon-ssm-agent[2240]: 2026-01-23 23:54:54 INFO http_proxy: Jan 23 23:54:55.148243 dbus-daemon[2110]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 23:54:55.148727 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 23:54:55.163567 coreos-metadata[2272]: Jan 23 23:54:55.160 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 23:54:55.166206 dbus-daemon[2110]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=2190 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 23:54:55.167258 coreos-metadata[2272]: Jan 23 23:54:55.167 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 23 23:54:55.186123 coreos-metadata[2272]: Jan 23 23:54:55.184 INFO Fetch successful Jan 23 23:54:55.186123 coreos-metadata[2272]: Jan 23 23:54:55.184 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 23:54:55.192090 coreos-metadata[2272]: Jan 23 23:54:55.192 INFO Fetch successful Jan 23 23:54:55.195753 unknown[2272]: wrote ssh authorized keys file for user: core Jan 23 23:54:55.221745 amazon-ssm-agent[2240]: 2026-01-23 23:54:54 INFO no_proxy: Jan 23 23:54:55.221920 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 23:54:55.271081 containerd[2180]: time="2026-01-23T23:54:55.270892165Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 23 23:54:55.299111 amazon-ssm-agent[2240]: 2026-01-23 23:54:54 INFO Checking if agent identity type OnPrem can be assumed Jan 23 23:54:55.321854 update-ssh-keys[2325]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:54:55.333369 polkitd[2316]: Started polkitd version 121 Jan 23 23:54:55.340888 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 23:54:55.372290 systemd[1]: Finished sshkeys.service. Jan 23 23:54:55.400480 containerd[2180]: time="2026-01-23T23:54:55.400131973Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:55.406307 amazon-ssm-agent[2240]: 2026-01-23 23:54:54 INFO Checking if agent identity type EC2 can be assumed Jan 23 23:54:55.413712 containerd[2180]: time="2026-01-23T23:54:55.412443805Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:54:55.413712 containerd[2180]: time="2026-01-23T23:54:55.412526593Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 23 23:54:55.413712 containerd[2180]: time="2026-01-23T23:54:55.412570537Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 23 23:54:55.413712 containerd[2180]: time="2026-01-23T23:54:55.412990609Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Jan 23 23:54:55.413712 containerd[2180]: time="2026-01-23T23:54:55.413048557Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:55.413712 containerd[2180]: time="2026-01-23T23:54:55.413232037Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:54:55.413712 containerd[2180]: time="2026-01-23T23:54:55.413266273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:55.416113 containerd[2180]: time="2026-01-23T23:54:55.416045785Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:54:55.417378 containerd[2180]: time="2026-01-23T23:54:55.416286589Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:55.417378 containerd[2180]: time="2026-01-23T23:54:55.416346325Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:54:55.417378 containerd[2180]: time="2026-01-23T23:54:55.416378461Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:55.417378 containerd[2180]: time="2026-01-23T23:54:55.416639173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:55.417378 containerd[2180]: time="2026-01-23T23:54:55.417259081Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:55.425747 containerd[2180]: time="2026-01-23T23:54:55.424354561Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:54:55.425990 containerd[2180]: time="2026-01-23T23:54:55.425912762Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 23 23:54:55.427536 containerd[2180]: time="2026-01-23T23:54:55.427478162Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 23 23:54:55.430898 containerd[2180]: time="2026-01-23T23:54:55.429123398Z" level=info msg="metadata content store policy set" policy=shared Jan 23 23:54:55.450656 polkitd[2316]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 23:54:55.450797 polkitd[2316]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 23:54:55.457343 containerd[2180]: time="2026-01-23T23:54:55.453986894Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 23 23:54:55.457343 containerd[2180]: time="2026-01-23T23:54:55.454113986Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 23 23:54:55.457343 containerd[2180]: time="2026-01-23T23:54:55.454260386Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jan 23 23:54:55.457343 containerd[2180]: time="2026-01-23T23:54:55.454305110Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 23 23:54:55.457343 containerd[2180]: time="2026-01-23T23:54:55.454344134Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 23 23:54:55.457343 containerd[2180]: time="2026-01-23T23:54:55.454640894Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 23 23:54:55.457343 containerd[2180]: time="2026-01-23T23:54:55.456511778Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 23 23:54:55.457343 containerd[2180]: time="2026-01-23T23:54:55.456817634Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 23 23:54:55.457343 containerd[2180]: time="2026-01-23T23:54:55.456856514Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 23 23:54:55.457343 containerd[2180]: time="2026-01-23T23:54:55.456888602Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 23 23:54:55.464968 containerd[2180]: time="2026-01-23T23:54:55.456920726Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 23 23:54:55.464968 containerd[2180]: time="2026-01-23T23:54:55.463243610Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 23 23:54:55.464968 containerd[2180]: time="2026-01-23T23:54:55.463289630Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 23 23:54:55.464968 containerd[2180]: time="2026-01-23T23:54:55.463328822Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 23 23:54:55.464968 containerd[2180]: time="2026-01-23T23:54:55.463367258Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 23 23:54:55.464968 containerd[2180]: time="2026-01-23T23:54:55.463402406Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 23 23:54:55.464968 containerd[2180]: time="2026-01-23T23:54:55.463433666Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 23 23:54:55.464968 containerd[2180]: time="2026-01-23T23:54:55.463462094Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 23 23:54:55.464968 containerd[2180]: time="2026-01-23T23:54:55.463518830Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 23 23:54:55.464968 containerd[2180]: time="2026-01-23T23:54:55.463554626Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 23 23:54:55.464968 containerd[2180]: time="2026-01-23T23:54:55.463611722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 23 23:54:55.464968 containerd[2180]: time="2026-01-23T23:54:55.463653194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 23 23:54:55.464968 containerd[2180]: time="2026-01-23T23:54:55.463684202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 23 23:54:55.464968 containerd[2180]: time="2026-01-23T23:54:55.463718882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 23 23:54:55.465682 containerd[2180]: time="2026-01-23T23:54:55.463774562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 23 23:54:55.465682 containerd[2180]: time="2026-01-23T23:54:55.463808006Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 23 23:54:55.465682 containerd[2180]: time="2026-01-23T23:54:55.463844786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 23 23:54:55.465682 containerd[2180]: time="2026-01-23T23:54:55.463884146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 23 23:54:55.465682 containerd[2180]: time="2026-01-23T23:54:55.463915610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 23 23:54:55.465682 containerd[2180]: time="2026-01-23T23:54:55.463979642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 23 23:54:55.465682 containerd[2180]: time="2026-01-23T23:54:55.464014250Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 23 23:54:55.465682 containerd[2180]: time="2026-01-23T23:54:55.464073674Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 23 23:54:55.465682 containerd[2180]: time="2026-01-23T23:54:55.464125442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 23 23:54:55.465682 containerd[2180]: time="2026-01-23T23:54:55.464157338Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 23 23:54:55.465682 containerd[2180]: time="2026-01-23T23:54:55.464188454Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 23 23:54:55.465682 containerd[2180]: time="2026-01-23T23:54:55.464441258Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 23 23:54:55.465682 containerd[2180]: time="2026-01-23T23:54:55.464490038Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 23 23:54:55.465682 containerd[2180]: time="2026-01-23T23:54:55.464518802Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 23 23:54:55.466381 containerd[2180]: time="2026-01-23T23:54:55.464549726Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 23 23:54:55.466381 containerd[2180]: time="2026-01-23T23:54:55.464581850Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 23 23:54:55.466381 containerd[2180]: time="2026-01-23T23:54:55.464624318Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 23 23:54:55.466381 containerd[2180]: time="2026-01-23T23:54:55.464651306Z" level=info msg="NRI interface is disabled by configuration." Jan 23 23:54:55.466381 containerd[2180]: time="2026-01-23T23:54:55.464679098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 23 23:54:55.484254 polkitd[2316]: Finished loading, compiling and executing 2 rules Jan 23 23:54:55.489457 containerd[2180]: time="2026-01-23T23:54:55.478576418Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 23 23:54:55.489457 containerd[2180]: time="2026-01-23T23:54:55.478715042Z" level=info msg="Connect containerd service" Jan 23 23:54:55.489457 containerd[2180]: time="2026-01-23T23:54:55.478776830Z" level=info msg="using legacy CRI server" Jan 23 23:54:55.489457 containerd[2180]: time="2026-01-23T23:54:55.478795274Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 23:54:55.489457 containerd[2180]: time="2026-01-23T23:54:55.479034638Z" level=info msg="Get image filesystem path 
\"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 23 23:54:55.492746 containerd[2180]: time="2026-01-23T23:54:55.490866578Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 23:54:55.498970 containerd[2180]: time="2026-01-23T23:54:55.495163142Z" level=info msg="Start subscribing containerd event" Jan 23 23:54:55.498970 containerd[2180]: time="2026-01-23T23:54:55.495272954Z" level=info msg="Start recovering state" Jan 23 23:54:55.498970 containerd[2180]: time="2026-01-23T23:54:55.495410150Z" level=info msg="Start event monitor" Jan 23 23:54:55.498970 containerd[2180]: time="2026-01-23T23:54:55.495454706Z" level=info msg="Start snapshots syncer" Jan 23 23:54:55.498970 containerd[2180]: time="2026-01-23T23:54:55.495478706Z" level=info msg="Start cni network conf syncer for default" Jan 23 23:54:55.498970 containerd[2180]: time="2026-01-23T23:54:55.495505574Z" level=info msg="Start streaming server" Jan 23 23:54:55.500553 dbus-daemon[2110]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 23:54:55.501843 containerd[2180]: time="2026-01-23T23:54:55.501789218Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 23:54:55.506964 containerd[2180]: time="2026-01-23T23:54:55.504133898Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 23:54:55.510555 amazon-ssm-agent[2240]: 2026-01-23 23:54:55 INFO Agent will take identity from EC2 Jan 23 23:54:55.511526 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 23:54:55.518204 containerd[2180]: time="2026-01-23T23:54:55.516086798Z" level=info msg="containerd successfully booted in 0.250728s" Jan 23 23:54:55.517310 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 23:54:55.529059 polkitd[2316]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 23:54:55.613037 amazon-ssm-agent[2240]: 2026-01-23 23:54:55 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:54:55.624735 systemd-hostnamed[2190]: Hostname set to (transient) Jan 23 23:54:55.624739 systemd-resolved[2041]: System hostname changed to 'ip-172-31-18-35'. Jan 23 23:54:55.649361 locksmithd[2199]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 23:54:55.715861 amazon-ssm-agent[2240]: 2026-01-23 23:54:55 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:54:55.817836 amazon-ssm-agent[2240]: 2026-01-23 23:54:55 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:54:55.917533 amazon-ssm-agent[2240]: 2026-01-23 23:54:55 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 23 23:54:56.018148 amazon-ssm-agent[2240]: 2026-01-23 23:54:55 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 23 23:54:56.117227 amazon-ssm-agent[2240]: 2026-01-23 23:54:55 INFO [amazon-ssm-agent] Starting Core Agent Jan 23 23:54:56.219520 amazon-ssm-agent[2240]: 2026-01-23 23:54:55 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Jan 23 23:54:56.324038 amazon-ssm-agent[2240]: 2026-01-23 23:54:55 INFO [Registrar] Starting registrar module Jan 23 23:54:56.433263 amazon-ssm-agent[2240]: 2026-01-23 23:54:55 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 23 23:54:56.453128 amazon-ssm-agent[2240]: 2026-01-23 23:54:56 INFO [EC2Identity] EC2 registration was successful. Jan 23 23:54:56.453128 amazon-ssm-agent[2240]: 2026-01-23 23:54:56 INFO [CredentialRefresher] credentialRefresher has started Jan 23 23:54:56.453290 amazon-ssm-agent[2240]: 2026-01-23 23:54:56 INFO [CredentialRefresher] Starting credentials refresher loop Jan 23 23:54:56.453290 amazon-ssm-agent[2240]: 2026-01-23 23:54:56 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 23 23:54:56.533078 amazon-ssm-agent[2240]: 2026-01-23 23:54:56 INFO [CredentialRefresher] Next credential rotation will be in 30.191657900866666 minutes Jan 23 23:54:56.562904 tar[2159]: linux-arm64/README.md Jan 23 23:54:56.599887 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 23:54:57.507895 amazon-ssm-agent[2240]: 2026-01-23 23:54:57 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 23 23:54:57.566293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:54:57.589048 (kubelet)[2383]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:54:57.609958 amazon-ssm-agent[2240]: 2026-01-23 23:54:57 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2376) started Jan 23 23:54:57.710302 amazon-ssm-agent[2240]: 2026-01-23 23:54:57 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 23 23:54:58.357989 sshd_keygen[2151]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 23:54:58.405175 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 23:54:58.423536 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 23:54:58.456845 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 23:54:58.457586 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 23:54:58.474561 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 23:54:58.509785 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 23:54:58.525519 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 23:54:58.537749 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 23:54:58.546359 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 23:54:58.553197 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 23:54:58.553827 systemd[1]: Startup finished in 10.704s (kernel) + 12.256s (userspace) = 22.960s. 
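[Editor's note] Worth flagging in the containerd CRI configuration dumped earlier: the runc runtime carries SystemdCgroup:false, which is consistent with the "CgroupDriver":"cgroupfs" the kubelet prints further down in this log; a mismatch between the container runtime's and the kubelet's cgroup driver is a classic source of pod-start failures. A hedged sketch for checking this on a node, assuming the stock config path /etc/containerd/config.toml (containerd can be pointed at a different file):

```python
# Read the SystemdCgroup setting out of containerd's config so it can be
# compared with the kubelet's cgroup driver. Assumes the default config
# path; tomllib is in the standard library from Python 3.11.
import tomllib

with open("/etc/containerd/config.toml", "rb") as f:
    cfg = tomllib.load(f)

runc_opts = (
    cfg.get("plugins", {})
    .get("io.containerd.grpc.v1.cri", {})
    .get("containerd", {})
    .get("runtimes", {})
    .get("runc", {})
    .get("options", {})
)
print("SystemdCgroup:", runc_opts.get("SystemdCgroup", False))
```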
Jan 23 23:54:58.928886 kubelet[2383]: E0123 23:54:58.928767 2383 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:54:58.935110 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:54:58.935580 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:55:01.195259 systemd-resolved[2041]: Clock change detected. Flushing caches. Jan 23 23:55:02.922060 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 23:55:02.930945 systemd[1]: Started sshd@0-172.31.18.35:22-4.153.228.146:53282.service - OpenSSH per-connection server daemon (4.153.228.146:53282). Jan 23 23:55:03.471742 sshd[2424]: Accepted publickey for core from 4.153.228.146 port 53282 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:03.475889 sshd[2424]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:03.493565 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 23:55:03.502914 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 23:55:03.507488 systemd-logind[2134]: New session 1 of user core. Jan 23 23:55:03.535861 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 23:55:03.549221 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 23:55:03.561856 (systemd)[2430]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 23:55:03.836481 systemd[2430]: Queued start job for default target default.target. Jan 23 23:55:03.837214 systemd[2430]: Created slice app.slice - User Application Slice. Jan 23 23:55:03.837271 systemd[2430]: Reached target paths.target - Paths. Jan 23 23:55:03.837305 systemd[2430]: Reached target timers.target - Timers. Jan 23 23:55:03.845605 systemd[2430]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 23:55:03.869835 systemd[2430]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 23:55:03.869960 systemd[2430]: Reached target sockets.target - Sockets. Jan 23 23:55:03.869992 systemd[2430]: Reached target basic.target - Basic System. Jan 23 23:55:03.870100 systemd[2430]: Reached target default.target - Main User Target. Jan 23 23:55:03.870164 systemd[2430]: Startup finished in 296ms. Jan 23 23:55:03.871410 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 23:55:03.877261 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 23:55:04.249900 systemd[1]: Started sshd@1-172.31.18.35:22-4.153.228.146:53292.service - OpenSSH per-connection server daemon (4.153.228.146:53292). Jan 23 23:55:04.744849 sshd[2442]: Accepted publickey for core from 4.153.228.146 port 53292 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:04.747474 sshd[2442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:04.756850 systemd-logind[2134]: New session 2 of user core. Jan 23 23:55:04.767899 systemd[1]: Started session-2.scope - Session 2 of User core. 
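[Editor's note] The kubelet failure above (and its repeats further down) is the expected crash-loop on a node that has not yet been joined to a cluster: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, so systemd keeps restarting the unit ("restart counter is at 1 … 2 … 3" later in this log) until the file exists. A small illustrative sketch for watching this from the node, not part of this boot, using systemd's NRestarts unit property:

```python
# Report whether the kubelet config written by kubeadm exists yet, and how
# often systemd has restarted the unit so far (NRestarts is a standard
# systemd unit property readable via `systemctl show`).
import pathlib
import subprocess

cfg = pathlib.Path("/var/lib/kubelet/config.yaml")
print("config present:", cfg.exists())

out = subprocess.run(
    ["systemctl", "show", "kubelet.service", "--property=NRestarts"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # e.g. "NRestarts=3"
```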
Jan 23 23:55:05.101748 sshd[2442]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:05.109213 systemd[1]: sshd@1-172.31.18.35:22-4.153.228.146:53292.service: Deactivated successfully. Jan 23 23:55:05.109797 systemd-logind[2134]: Session 2 logged out. Waiting for processes to exit. Jan 23 23:55:05.116343 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 23:55:05.118787 systemd-logind[2134]: Removed session 2. Jan 23 23:55:05.187899 systemd[1]: Started sshd@2-172.31.18.35:22-4.153.228.146:39100.service - OpenSSH per-connection server daemon (4.153.228.146:39100). Jan 23 23:55:05.702887 sshd[2450]: Accepted publickey for core from 4.153.228.146 port 39100 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:05.705518 sshd[2450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:05.714316 systemd-logind[2134]: New session 3 of user core. Jan 23 23:55:05.724948 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 23:55:06.057484 sshd[2450]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:06.062610 systemd-logind[2134]: Session 3 logged out. Waiting for processes to exit. Jan 23 23:55:06.063817 systemd[1]: sshd@2-172.31.18.35:22-4.153.228.146:39100.service: Deactivated successfully. Jan 23 23:55:06.071157 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 23:55:06.073292 systemd-logind[2134]: Removed session 3. Jan 23 23:55:06.148836 systemd[1]: Started sshd@3-172.31.18.35:22-4.153.228.146:39112.service - OpenSSH per-connection server daemon (4.153.228.146:39112). Jan 23 23:55:06.647008 sshd[2458]: Accepted publickey for core from 4.153.228.146 port 39112 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:06.649645 sshd[2458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:06.658156 systemd-logind[2134]: New session 4 of user core. Jan 23 23:55:06.664035 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 23:55:07.008779 sshd[2458]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:07.013924 systemd-logind[2134]: Session 4 logged out. Waiting for processes to exit. Jan 23 23:55:07.015981 systemd[1]: sshd@3-172.31.18.35:22-4.153.228.146:39112.service: Deactivated successfully. Jan 23 23:55:07.022537 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 23:55:07.024746 systemd-logind[2134]: Removed session 4. Jan 23 23:55:07.091950 systemd[1]: Started sshd@4-172.31.18.35:22-4.153.228.146:39120.service - OpenSSH per-connection server daemon (4.153.228.146:39120). Jan 23 23:55:07.624914 sshd[2466]: Accepted publickey for core from 4.153.228.146 port 39120 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:07.627629 sshd[2466]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:07.636637 systemd-logind[2134]: New session 5 of user core. Jan 23 23:55:07.639937 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 23:55:07.940610 sudo[2470]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 23:55:07.941251 sudo[2470]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:55:07.960048 sudo[2470]: pam_unix(sudo:session): session closed for user root Jan 23 23:55:08.044873 sshd[2466]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:08.051013 systemd-logind[2134]: Session 5 logged out. Waiting for processes to exit. 
Jan 23 23:55:08.051950 systemd[1]: sshd@4-172.31.18.35:22-4.153.228.146:39120.service: Deactivated successfully. Jan 23 23:55:08.058941 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 23:55:08.060622 systemd-logind[2134]: Removed session 5. Jan 23 23:55:08.137899 systemd[1]: Started sshd@5-172.31.18.35:22-4.153.228.146:39126.service - OpenSSH per-connection server daemon (4.153.228.146:39126). Jan 23 23:55:08.680523 sshd[2475]: Accepted publickey for core from 4.153.228.146 port 39126 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:08.683187 sshd[2475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:08.691199 systemd-logind[2134]: New session 6 of user core. Jan 23 23:55:08.698891 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 23 23:55:08.981439 sudo[2480]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 23:55:08.982130 sudo[2480]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:55:08.988745 sudo[2480]: pam_unix(sudo:session): session closed for user root Jan 23 23:55:08.999135 sudo[2479]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 23 23:55:08.999797 sudo[2479]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:55:09.021014 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 23 23:55:09.036345 auditctl[2483]: No rules Jan 23 23:55:09.037420 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 23:55:09.037921 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 23 23:55:09.052087 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:55:09.097027 augenrules[2502]: No rules Jan 23 23:55:09.100768 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:55:09.104894 sudo[2479]: pam_unix(sudo:session): session closed for user root Jan 23 23:55:09.189748 sshd[2475]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:09.196628 systemd-logind[2134]: Session 6 logged out. Waiting for processes to exit. Jan 23 23:55:09.198244 systemd[1]: sshd@5-172.31.18.35:22-4.153.228.146:39126.service: Deactivated successfully. Jan 23 23:55:09.203503 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 23:55:09.206056 systemd-logind[2134]: Removed session 6. Jan 23 23:55:09.274099 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 23:55:09.286752 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:55:09.289952 systemd[1]: Started sshd@6-172.31.18.35:22-4.153.228.146:39140.service - OpenSSH per-connection server daemon (4.153.228.146:39140). Jan 23 23:55:09.713836 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 23:55:09.731171 (kubelet)[2525]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:55:09.814626 kubelet[2525]: E0123 23:55:09.814510 2525 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:55:09.822865 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:55:09.823294 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:55:09.839223 sshd[2512]: Accepted publickey for core from 4.153.228.146 port 39140 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:55:09.842263 sshd[2512]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:55:09.851824 systemd-logind[2134]: New session 7 of user core. Jan 23 23:55:09.864136 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 23:55:10.137484 sudo[2535]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 23:55:10.138206 sudo[2535]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:55:10.787928 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 23:55:10.805267 (dockerd)[2551]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 23:55:11.336369 dockerd[2551]: time="2026-01-23T23:55:11.336255078Z" level=info msg="Starting up" Jan 23 23:55:11.622773 dockerd[2551]: time="2026-01-23T23:55:11.622124575Z" level=info msg="Loading containers: start." Jan 23 23:55:11.833518 kernel: Initializing XFRM netlink socket Jan 23 23:55:11.897221 (udev-worker)[2573]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:55:11.989641 systemd-networkd[1714]: docker0: Link UP Jan 23 23:55:12.014274 dockerd[2551]: time="2026-01-23T23:55:12.012996605Z" level=info msg="Loading containers: done." Jan 23 23:55:12.039432 dockerd[2551]: time="2026-01-23T23:55:12.038013701Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 23:55:12.039432 dockerd[2551]: time="2026-01-23T23:55:12.038179709Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 23 23:55:12.039432 dockerd[2551]: time="2026-01-23T23:55:12.038369477Z" level=info msg="Daemon has completed initialization" Jan 23 23:55:12.039218 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1360627700-merged.mount: Deactivated successfully. Jan 23 23:55:12.093631 dockerd[2551]: time="2026-01-23T23:55:12.093533874Z" level=info msg="API listen on /run/docker.sock" Jan 23 23:55:12.095119 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 23:55:13.224608 containerd[2180]: time="2026-01-23T23:55:13.224539327Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 23 23:55:13.857378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3948214883.mount: Deactivated successfully. 
Jan 23 23:55:15.335410 containerd[2180]: time="2026-01-23T23:55:15.335312470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:15.337802 containerd[2180]: time="2026-01-23T23:55:15.337730662Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982" Jan 23 23:55:15.339422 containerd[2180]: time="2026-01-23T23:55:15.338332894Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:15.344977 containerd[2180]: time="2026-01-23T23:55:15.344900542Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:15.347994 containerd[2180]: time="2026-01-23T23:55:15.347904226Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.123281223s" Jan 23 23:55:15.347994 containerd[2180]: time="2026-01-23T23:55:15.347984602Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 23 23:55:15.349308 containerd[2180]: time="2026-01-23T23:55:15.349211494Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 23 23:55:16.832518 containerd[2180]: time="2026-01-23T23:55:16.832453609Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:16.835686 containerd[2180]: time="2026-01-23T23:55:16.835595833Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:16.835848 containerd[2180]: time="2026-01-23T23:55:16.835719481Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086" Jan 23 23:55:16.843297 containerd[2180]: time="2026-01-23T23:55:16.843206485Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:16.846691 containerd[2180]: time="2026-01-23T23:55:16.846442357Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.497155167s" Jan 23 23:55:16.846691 containerd[2180]: time="2026-01-23T23:55:16.846521485Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 23 
23:55:16.848785 containerd[2180]: time="2026-01-23T23:55:16.847957477Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 23 23:55:18.114914 containerd[2180]: time="2026-01-23T23:55:18.114830688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:18.117451 containerd[2180]: time="2026-01-23T23:55:18.117333132Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747" Jan 23 23:55:18.118716 containerd[2180]: time="2026-01-23T23:55:18.118614408Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:18.125751 containerd[2180]: time="2026-01-23T23:55:18.125678820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:18.129008 containerd[2180]: time="2026-01-23T23:55:18.128728668Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.280634727s" Jan 23 23:55:18.129008 containerd[2180]: time="2026-01-23T23:55:18.128834568Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 23 23:55:18.130597 containerd[2180]: time="2026-01-23T23:55:18.130241952Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 23 23:55:19.602938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4065558090.mount: Deactivated successfully. Jan 23 23:55:19.866518 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 23:55:19.880232 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:55:20.321055 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:55:20.340487 (kubelet)[2780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:55:20.465724 kubelet[2780]: E0123 23:55:20.464924 2780 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:55:20.471556 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:55:20.472028 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 23 23:55:20.549068 containerd[2180]: time="2026-01-23T23:55:20.548989912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:20.550931 containerd[2180]: time="2026-01-23T23:55:20.550705780Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724" Jan 23 23:55:20.552437 containerd[2180]: time="2026-01-23T23:55:20.551925244Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:20.556445 containerd[2180]: time="2026-01-23T23:55:20.556169200Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:20.558594 containerd[2180]: time="2026-01-23T23:55:20.558027448Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 2.427678384s" Jan 23 23:55:20.558594 containerd[2180]: time="2026-01-23T23:55:20.558093844Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 23 23:55:20.559183 containerd[2180]: time="2026-01-23T23:55:20.559031320Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 23 23:55:21.149514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1308595302.mount: Deactivated successfully. 
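[Editor's note] Each containerd "Pulled image … in N s" duration can be cross-checked against the timestamps on the matching "PullImage" start line: the kube-proxy pull above starts at 2026-01-23T23:55:18.130241952Z and completes at 2026-01-23T23:55:20.558027448Z, consistent with the reported 2.427678384s (the pull begins fractionally after the start line is emitted). A small verification sketch using those two timestamps from the log, truncated to microseconds for strptime:

```python
# Cross-check a containerd pull duration against the timestamps of the
# "PullImage" start and "Pulled image" completion lines (values copied
# from the kube-proxy pull above, nanoseconds truncated to microseconds).
from datetime import datetime

FMT = "%Y-%m-%dT%H:%M:%S.%f"
start = datetime.strptime("2026-01-23T23:55:18.130241", FMT)
end = datetime.strptime("2026-01-23T23:55:20.558027", FMT)
print(f"{(end - start).total_seconds():.3f}s")  # 2.428s, matching the log
```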
Jan 23 23:55:22.721377 containerd[2180]: time="2026-01-23T23:55:22.717901686Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:22.725719 containerd[2180]: time="2026-01-23T23:55:22.725661102Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 23 23:55:22.731975 containerd[2180]: time="2026-01-23T23:55:22.731905710Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:22.752835 containerd[2180]: time="2026-01-23T23:55:22.752747047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:22.756366 containerd[2180]: time="2026-01-23T23:55:22.756271663Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 2.197180199s" Jan 23 23:55:22.756522 containerd[2180]: time="2026-01-23T23:55:22.756406807Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 23 23:55:22.757311 containerd[2180]: time="2026-01-23T23:55:22.757243687Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 23 23:55:23.874902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2860212565.mount: Deactivated successfully. 
Jan 23 23:55:23.887465 containerd[2180]: time="2026-01-23T23:55:23.887206112Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:23.889519 containerd[2180]: time="2026-01-23T23:55:23.889424816Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 23 23:55:23.892534 containerd[2180]: time="2026-01-23T23:55:23.892428884Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:23.898565 containerd[2180]: time="2026-01-23T23:55:23.898424204Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:23.900751 containerd[2180]: time="2026-01-23T23:55:23.900662132Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 1.143336857s" Jan 23 23:55:23.900751 containerd[2180]: time="2026-01-23T23:55:23.900740612Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 23 23:55:23.902457 containerd[2180]: time="2026-01-23T23:55:23.901621700Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 23 23:55:24.488674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount368441536.mount: Deactivated successfully. Jan 23 23:55:25.927813 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Jan 23 23:55:26.795419 containerd[2180]: time="2026-01-23T23:55:26.793676711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:26.796038 containerd[2180]: time="2026-01-23T23:55:26.795811739Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Jan 23 23:55:26.797130 containerd[2180]: time="2026-01-23T23:55:26.797056259Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:26.804425 containerd[2180]: time="2026-01-23T23:55:26.804329219Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:26.807814 containerd[2180]: time="2026-01-23T23:55:26.807753563Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.906049051s" Jan 23 23:55:26.808062 containerd[2180]: time="2026-01-23T23:55:26.808018511Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 23 23:55:30.616862 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 23 23:55:30.625924 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:55:30.976978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:55:30.981147 (kubelet)[2935]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:55:31.065013 kubelet[2935]: E0123 23:55:31.064952 2935 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:55:31.071871 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:55:31.072468 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:55:33.751288 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:55:33.768906 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:55:33.836121 systemd[1]: Reloading requested from client PID 2951 ('systemctl') (unit session-7.scope)... Jan 23 23:55:33.836176 systemd[1]: Reloading... Jan 23 23:55:34.104449 zram_generator::config[2997]: No configuration found. Jan 23 23:55:34.364417 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:55:34.550029 systemd[1]: Reloading finished in 712 ms. Jan 23 23:55:34.638797 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 23 23:55:34.639186 systemd[1]: kubelet.service: Failed with result 'signal'. 
Jan 23 23:55:34.642034 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:55:34.654185 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:55:34.998776 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:55:35.011131 (kubelet)[3064]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:55:35.088320 kubelet[3064]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:55:35.088320 kubelet[3064]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:55:35.089477 kubelet[3064]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:55:35.089477 kubelet[3064]: I0123 23:55:35.089155 3064 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:55:36.212882 kubelet[3064]: I0123 23:55:36.212831 3064 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 23:55:36.215452 kubelet[3064]: I0123 23:55:36.213604 3064 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:55:36.215452 kubelet[3064]: I0123 23:55:36.214057 3064 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 23:55:36.263605 kubelet[3064]: E0123 23:55:36.263537 3064 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.18.35:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.35:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:55:36.266645 kubelet[3064]: I0123 23:55:36.266592 3064 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:55:36.279868 kubelet[3064]: E0123 23:55:36.279793 3064 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:55:36.279868 kubelet[3064]: I0123 23:55:36.279856 3064 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 23 23:55:36.287326 kubelet[3064]: I0123 23:55:36.287268 3064 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 23:55:36.288235 kubelet[3064]: I0123 23:55:36.288161 3064 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:55:36.288567 kubelet[3064]: I0123 23:55:36.288221 3064 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-35","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 23 23:55:36.288732 kubelet[3064]: I0123 23:55:36.288710 3064 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:55:36.288732 kubelet[3064]: I0123 23:55:36.288731 3064 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 23:55:36.289202 kubelet[3064]: I0123 23:55:36.289156 3064 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:55:36.295686 kubelet[3064]: I0123 23:55:36.295611 3064 kubelet.go:446] "Attempting to sync node with API server" Jan 23 23:55:36.295686 kubelet[3064]: I0123 23:55:36.295684 3064 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:55:36.295884 kubelet[3064]: I0123 23:55:36.295723 3064 kubelet.go:352] "Adding apiserver pod source" Jan 23 23:55:36.295884 kubelet[3064]: I0123 23:55:36.295744 3064 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:55:36.309431 kubelet[3064]: W0123 23:55:36.307368 3064 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-35&limit=500&resourceVersion=0": dial tcp 172.31.18.35:6443: connect: connection refused Jan 23 23:55:36.309431 kubelet[3064]: E0123 23:55:36.307528 3064 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-35&limit=500&resourceVersion=0\": dial tcp 172.31.18.35:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:55:36.309431 kubelet[3064]: W0123 
23:55:36.308426 3064 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.18.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.35:6443: connect: connection refused Jan 23 23:55:36.309431 kubelet[3064]: E0123 23:55:36.308511 3064 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.35:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:55:36.309431 kubelet[3064]: I0123 23:55:36.308687 3064 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:55:36.310314 kubelet[3064]: I0123 23:55:36.310281 3064 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 23:55:36.310686 kubelet[3064]: W0123 23:55:36.310661 3064 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 23:55:36.315365 kubelet[3064]: I0123 23:55:36.315317 3064 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:55:36.315620 kubelet[3064]: I0123 23:55:36.315597 3064 server.go:1287] "Started kubelet" Jan 23 23:55:36.324994 kubelet[3064]: I0123 23:55:36.324933 3064 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:55:36.327681 kubelet[3064]: E0123 23:55:36.326202 3064 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.35:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.35:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-35.188d816d9741bada default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-35,UID:ip-172-31-18-35,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-35,},FirstTimestamp:2026-01-23 23:55:36.315558618 +0000 UTC m=+1.297630664,LastTimestamp:2026-01-23 23:55:36.315558618 +0000 UTC m=+1.297630664,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-35,}" Jan 23 23:55:36.327681 kubelet[3064]: I0123 23:55:36.327078 3064 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:55:36.327952 kubelet[3064]: I0123 23:55:36.327716 3064 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:55:36.333383 kubelet[3064]: I0123 23:55:36.333339 3064 server.go:479] "Adding debug handlers to kubelet server" Jan 23 23:55:36.334807 kubelet[3064]: I0123 23:55:36.334743 3064 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:55:36.340350 kubelet[3064]: I0123 23:55:36.340281 3064 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:55:36.342594 kubelet[3064]: I0123 23:55:36.342543 3064 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:55:36.343198 kubelet[3064]: E0123 23:55:36.343138 3064 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"ip-172-31-18-35\" not found" Jan 23 23:55:36.348047 kubelet[3064]: E0123 23:55:36.347866 3064 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-35?timeout=10s\": dial tcp 172.31.18.35:6443: connect: connection refused" interval="200ms" Jan 23 23:55:36.348047 kubelet[3064]: I0123 23:55:36.347960 3064 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:55:36.348847 kubelet[3064]: I0123 23:55:36.348807 3064 factory.go:221] Registration of the systemd container factory successfully Jan 23 23:55:36.349166 kubelet[3064]: I0123 23:55:36.349132 3064 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:55:36.350585 kubelet[3064]: E0123 23:55:36.350537 3064 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:55:36.353781 kubelet[3064]: I0123 23:55:36.353686 3064 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:55:36.355524 kubelet[3064]: I0123 23:55:36.354571 3064 factory.go:221] Registration of the containerd container factory successfully Jan 23 23:55:36.376815 kubelet[3064]: I0123 23:55:36.376724 3064 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 23:55:36.380698 kubelet[3064]: I0123 23:55:36.380636 3064 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 23:55:36.380698 kubelet[3064]: I0123 23:55:36.380690 3064 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 23:55:36.380862 kubelet[3064]: I0123 23:55:36.380726 3064 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 23:55:36.380862 kubelet[3064]: I0123 23:55:36.380744 3064 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 23:55:36.380862 kubelet[3064]: E0123 23:55:36.380817 3064 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:55:36.398076 kubelet[3064]: W0123 23:55:36.397966 3064 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.35:6443: connect: connection refused Jan 23 23:55:36.398228 kubelet[3064]: E0123 23:55:36.398087 3064 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.35:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:55:36.412439 kubelet[3064]: W0123 23:55:36.411061 3064 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.35:6443: connect: connection refused Jan 23 23:55:36.412439 kubelet[3064]: E0123 23:55:36.411161 3064 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.35:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:55:36.426999 kubelet[3064]: I0123 23:55:36.426953 3064 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:55:36.426999 kubelet[3064]: I0123 23:55:36.426992 3064 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:55:36.427222 kubelet[3064]: I0123 23:55:36.427029 3064 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:55:36.432603 kubelet[3064]: I0123 23:55:36.432383 3064 policy_none.go:49] "None policy: Start" Jan 23 23:55:36.432603 kubelet[3064]: I0123 23:55:36.432591 3064 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:55:36.432840 kubelet[3064]: I0123 23:55:36.432628 3064 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:55:36.444174 kubelet[3064]: E0123 23:55:36.444091 3064 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-35\" not found" Jan 23 23:55:36.445343 kubelet[3064]: I0123 23:55:36.445272 3064 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 23:55:36.445711 kubelet[3064]: I0123 23:55:36.445648 3064 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:55:36.445835 kubelet[3064]: I0123 23:55:36.445690 3064 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:55:36.451202 kubelet[3064]: I0123 23:55:36.450606 3064 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:55:36.455600 kubelet[3064]: E0123 23:55:36.455527 3064 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 23:55:36.455600 kubelet[3064]: E0123 23:55:36.455609 3064 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-35\" not found" Jan 23 23:55:36.498505 kubelet[3064]: E0123 23:55:36.495866 3064 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-35\" not found" node="ip-172-31-18-35" Jan 23 23:55:36.498653 kubelet[3064]: E0123 23:55:36.498490 3064 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-35\" not found" node="ip-172-31-18-35" Jan 23 23:55:36.508032 kubelet[3064]: E0123 23:55:36.507962 3064 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-35\" not found" node="ip-172-31-18-35" Jan 23 23:55:36.548569 kubelet[3064]: E0123 23:55:36.548493 3064 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-35?timeout=10s\": dial tcp 172.31.18.35:6443: connect: connection refused" interval="400ms" Jan 23 23:55:36.549318 kubelet[3064]: I0123 23:55:36.549292 3064 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-35" Jan 23 23:55:36.550145 kubelet[3064]: E0123 23:55:36.550092 3064 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.35:6443/api/v1/nodes\": dial tcp 172.31.18.35:6443: connect: connection refused" node="ip-172-31-18-35" Jan 23 23:55:36.558661 kubelet[3064]: I0123 23:55:36.558593 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1e8e03c42641968d7b0ff2bb8557412-ca-certs\") pod \"kube-apiserver-ip-172-31-18-35\" (UID: \"d1e8e03c42641968d7b0ff2bb8557412\") " pod="kube-system/kube-apiserver-ip-172-31-18-35" Jan 23 23:55:36.558778 kubelet[3064]: I0123 23:55:36.558669 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1e8e03c42641968d7b0ff2bb8557412-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-35\" (UID: \"d1e8e03c42641968d7b0ff2bb8557412\") " pod="kube-system/kube-apiserver-ip-172-31-18-35" Jan 23 23:55:36.558778 kubelet[3064]: I0123 23:55:36.558710 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1e8e03c42641968d7b0ff2bb8557412-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-35\" (UID: \"d1e8e03c42641968d7b0ff2bb8557412\") " pod="kube-system/kube-apiserver-ip-172-31-18-35" Jan 23 23:55:36.558778 kubelet[3064]: I0123 23:55:36.558749 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/948760d6256893c5a8499c1604f2e7f0-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-35\" (UID: \"948760d6256893c5a8499c1604f2e7f0\") " pod="kube-system/kube-scheduler-ip-172-31-18-35" Jan 23 23:55:36.558959 kubelet[3064]: I0123 23:55:36.558786 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf5d8bb9dc909505a8bfa246801a7d89-ca-certs\") pod 
\"kube-controller-manager-ip-172-31-18-35\" (UID: \"cf5d8bb9dc909505a8bfa246801a7d89\") " pod="kube-system/kube-controller-manager-ip-172-31-18-35" Jan 23 23:55:36.558959 kubelet[3064]: I0123 23:55:36.558821 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cf5d8bb9dc909505a8bfa246801a7d89-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-35\" (UID: \"cf5d8bb9dc909505a8bfa246801a7d89\") " pod="kube-system/kube-controller-manager-ip-172-31-18-35" Jan 23 23:55:36.558959 kubelet[3064]: I0123 23:55:36.558854 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf5d8bb9dc909505a8bfa246801a7d89-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-35\" (UID: \"cf5d8bb9dc909505a8bfa246801a7d89\") " pod="kube-system/kube-controller-manager-ip-172-31-18-35" Jan 23 23:55:36.558959 kubelet[3064]: I0123 23:55:36.558893 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf5d8bb9dc909505a8bfa246801a7d89-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-35\" (UID: \"cf5d8bb9dc909505a8bfa246801a7d89\") " pod="kube-system/kube-controller-manager-ip-172-31-18-35" Jan 23 23:55:36.558959 kubelet[3064]: I0123 23:55:36.558931 3064 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf5d8bb9dc909505a8bfa246801a7d89-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-35\" (UID: \"cf5d8bb9dc909505a8bfa246801a7d89\") " pod="kube-system/kube-controller-manager-ip-172-31-18-35" Jan 23 23:55:36.752770 kubelet[3064]: I0123 23:55:36.752630 3064 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-35" Jan 23 23:55:36.753717 kubelet[3064]: E0123 23:55:36.753468 3064 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.35:6443/api/v1/nodes\": dial tcp 172.31.18.35:6443: connect: connection refused" node="ip-172-31-18-35" Jan 23 23:55:36.797905 containerd[2180]: time="2026-01-23T23:55:36.797842328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-35,Uid:d1e8e03c42641968d7b0ff2bb8557412,Namespace:kube-system,Attempt:0,}" Jan 23 23:55:36.803047 containerd[2180]: time="2026-01-23T23:55:36.802655996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-35,Uid:cf5d8bb9dc909505a8bfa246801a7d89,Namespace:kube-system,Attempt:0,}" Jan 23 23:55:36.811293 containerd[2180]: time="2026-01-23T23:55:36.811194932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-35,Uid:948760d6256893c5a8499c1604f2e7f0,Namespace:kube-system,Attempt:0,}" Jan 23 23:55:36.949188 kubelet[3064]: E0123 23:55:36.949109 3064 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-35?timeout=10s\": dial tcp 172.31.18.35:6443: connect: connection refused" interval="800ms" Jan 23 23:55:37.145636 kubelet[3064]: W0123 23:55:37.145478 3064 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://172.31.18.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.18.35:6443: connect: connection refused Jan 23 23:55:37.145636 kubelet[3064]: E0123 23:55:37.145582 3064 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.18.35:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.35:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:55:37.156889 kubelet[3064]: I0123 23:55:37.156350 3064 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-35" Jan 23 23:55:37.156889 kubelet[3064]: E0123 23:55:37.156840 3064 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.35:6443/api/v1/nodes\": dial tcp 172.31.18.35:6443: connect: connection refused" node="ip-172-31-18-35" Jan 23 23:55:37.318368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount883395095.mount: Deactivated successfully. Jan 23 23:55:37.336684 containerd[2180]: time="2026-01-23T23:55:37.336596263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:55:37.338766 containerd[2180]: time="2026-01-23T23:55:37.338694967Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:55:37.340775 containerd[2180]: time="2026-01-23T23:55:37.340684063Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 23 23:55:37.342784 containerd[2180]: time="2026-01-23T23:55:37.342663763Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:55:37.345446 containerd[2180]: time="2026-01-23T23:55:37.345019855Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:55:37.348429 containerd[2180]: time="2026-01-23T23:55:37.347818423Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:55:37.349526 containerd[2180]: time="2026-01-23T23:55:37.349456447Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:55:37.354486 containerd[2180]: time="2026-01-23T23:55:37.354348751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:55:37.358749 containerd[2180]: time="2026-01-23T23:55:37.358691647Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 547.337115ms" Jan 23 23:55:37.363177 containerd[2180]: 
time="2026-01-23T23:55:37.363098983Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 565.139175ms" Jan 23 23:55:37.367668 containerd[2180]: time="2026-01-23T23:55:37.367333099Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 564.566343ms" Jan 23 23:55:37.468562 kubelet[3064]: E0123 23:55:37.468128 3064 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.35:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.35:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-35.188d816d9741bada default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-35,UID:ip-172-31-18-35,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-35,},FirstTimestamp:2026-01-23 23:55:36.315558618 +0000 UTC m=+1.297630664,LastTimestamp:2026-01-23 23:55:36.315558618 +0000 UTC m=+1.297630664,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-35,}" Jan 23 23:55:37.606437 containerd[2180]: time="2026-01-23T23:55:37.604683764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:37.606437 containerd[2180]: time="2026-01-23T23:55:37.604818248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:37.606437 containerd[2180]: time="2026-01-23T23:55:37.604850924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:37.610830 containerd[2180]: time="2026-01-23T23:55:37.610604540Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:37.615207 containerd[2180]: time="2026-01-23T23:55:37.613950932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:37.615207 containerd[2180]: time="2026-01-23T23:55:37.614064716Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:37.615207 containerd[2180]: time="2026-01-23T23:55:37.614102492Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:37.615207 containerd[2180]: time="2026-01-23T23:55:37.614277140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:37.619082 containerd[2180]: time="2026-01-23T23:55:37.618155060Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:37.619082 containerd[2180]: time="2026-01-23T23:55:37.618274004Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:37.619082 containerd[2180]: time="2026-01-23T23:55:37.618301856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:37.619082 containerd[2180]: time="2026-01-23T23:55:37.618497588Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:37.755730 kubelet[3064]: E0123 23:55:37.751530 3064 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-35?timeout=10s\": dial tcp 172.31.18.35:6443: connect: connection refused" interval="1.6s" Jan 23 23:55:37.782062 kubelet[3064]: W0123 23:55:37.781963 3064 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.18.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-35&limit=500&resourceVersion=0": dial tcp 172.31.18.35:6443: connect: connection refused Jan 23 23:55:37.782202 kubelet[3064]: E0123 23:55:37.782078 3064 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.18.35:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-35&limit=500&resourceVersion=0\": dial tcp 172.31.18.35:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:55:37.814812 containerd[2180]: time="2026-01-23T23:55:37.814745517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-35,Uid:948760d6256893c5a8499c1604f2e7f0,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa1119018954e4548032be63f9b8e753e8f0c83ea2168039b85a73ffcf64de91\"" Jan 23 23:55:37.818248 kubelet[3064]: W0123 23:55:37.818114 3064 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.18.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.18.35:6443: connect: connection refused Jan 23 23:55:37.818474 kubelet[3064]: E0123 23:55:37.818282 3064 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.18.35:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.35:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:55:37.823830 containerd[2180]: time="2026-01-23T23:55:37.823762389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-35,Uid:d1e8e03c42641968d7b0ff2bb8557412,Namespace:kube-system,Attempt:0,} returns sandbox id \"62a847d48104fb3f7f6acf08ca10c11185a245d3d8dcfe55905adca5f1a269d9\"" Jan 23 23:55:37.826043 containerd[2180]: time="2026-01-23T23:55:37.825945897Z" level=info msg="CreateContainer within sandbox \"aa1119018954e4548032be63f9b8e753e8f0c83ea2168039b85a73ffcf64de91\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 23:55:37.844776 containerd[2180]: time="2026-01-23T23:55:37.844714930Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-35,Uid:cf5d8bb9dc909505a8bfa246801a7d89,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba031e17d22029930748fc0e8ac7a1331195fe0b0fb746aee31fed550176ff9c\"" Jan 23 23:55:37.848803 containerd[2180]: time="2026-01-23T23:55:37.847784026Z" level=info msg="CreateContainer within sandbox \"62a847d48104fb3f7f6acf08ca10c11185a245d3d8dcfe55905adca5f1a269d9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 23:55:37.858441 containerd[2180]: time="2026-01-23T23:55:37.858329338Z" level=info msg="CreateContainer within sandbox \"ba031e17d22029930748fc0e8ac7a1331195fe0b0fb746aee31fed550176ff9c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 23:55:37.864652 containerd[2180]: time="2026-01-23T23:55:37.864328498Z" level=info msg="CreateContainer within sandbox \"aa1119018954e4548032be63f9b8e753e8f0c83ea2168039b85a73ffcf64de91\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9fd4ba98232d13b94b27a98742138af4b0d97a786a8e0fe4347cdb30891a40e1\"" Jan 23 23:55:37.865746 containerd[2180]: time="2026-01-23T23:55:37.865661818Z" level=info msg="StartContainer for \"9fd4ba98232d13b94b27a98742138af4b0d97a786a8e0fe4347cdb30891a40e1\"" Jan 23 23:55:37.898830 containerd[2180]: time="2026-01-23T23:55:37.898765030Z" level=info msg="CreateContainer within sandbox \"ba031e17d22029930748fc0e8ac7a1331195fe0b0fb746aee31fed550176ff9c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"399d20a47d07d429df761d1e1c518748613b01ec1b7a89ca23e0d632416c33c1\"" Jan 23 23:55:37.900147 containerd[2180]: time="2026-01-23T23:55:37.900074410Z" level=info msg="StartContainer for \"399d20a47d07d429df761d1e1c518748613b01ec1b7a89ca23e0d632416c33c1\"" Jan 23 23:55:37.909294 containerd[2180]: time="2026-01-23T23:55:37.909149830Z" level=info msg="CreateContainer within sandbox \"62a847d48104fb3f7f6acf08ca10c11185a245d3d8dcfe55905adca5f1a269d9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"54f79b1c32b6ec4c3e2c642bb5f913dc373344519053a424fb270f3b679d623d\"" Jan 23 23:55:37.912476 containerd[2180]: time="2026-01-23T23:55:37.912151114Z" level=info msg="StartContainer for \"54f79b1c32b6ec4c3e2c642bb5f913dc373344519053a424fb270f3b679d623d\"" Jan 23 23:55:37.918821 kubelet[3064]: W0123 23:55:37.918627 3064 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.18.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.18.35:6443: connect: connection refused Jan 23 23:55:37.918821 kubelet[3064]: E0123 23:55:37.918742 3064 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.18.35:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.35:6443: connect: connection refused" logger="UnhandledError" Jan 23 23:55:37.966430 kubelet[3064]: I0123 23:55:37.964740 3064 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-35" Jan 23 23:55:37.969762 kubelet[3064]: E0123 23:55:37.969654 3064 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.35:6443/api/v1/nodes\": dial tcp 172.31.18.35:6443: connect: connection refused" node="ip-172-31-18-35" Jan 23 23:55:38.062023 containerd[2180]: time="2026-01-23T23:55:38.061826143Z" level=info 
msg="StartContainer for \"9fd4ba98232d13b94b27a98742138af4b0d97a786a8e0fe4347cdb30891a40e1\" returns successfully" Jan 23 23:55:38.179142 containerd[2180]: time="2026-01-23T23:55:38.178555783Z" level=info msg="StartContainer for \"54f79b1c32b6ec4c3e2c642bb5f913dc373344519053a424fb270f3b679d623d\" returns successfully" Jan 23 23:55:38.200408 containerd[2180]: time="2026-01-23T23:55:38.200302927Z" level=info msg="StartContainer for \"399d20a47d07d429df761d1e1c518748613b01ec1b7a89ca23e0d632416c33c1\" returns successfully" Jan 23 23:55:38.447645 kubelet[3064]: E0123 23:55:38.447587 3064 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-35\" not found" node="ip-172-31-18-35" Jan 23 23:55:38.460944 kubelet[3064]: E0123 23:55:38.460654 3064 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-35\" not found" node="ip-172-31-18-35" Jan 23 23:55:38.471428 kubelet[3064]: E0123 23:55:38.469100 3064 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-35\" not found" node="ip-172-31-18-35" Jan 23 23:55:39.470038 kubelet[3064]: E0123 23:55:39.469981 3064 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-35\" not found" node="ip-172-31-18-35" Jan 23 23:55:39.473118 kubelet[3064]: E0123 23:55:39.473040 3064 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-35\" not found" node="ip-172-31-18-35" Jan 23 23:55:39.577422 kubelet[3064]: I0123 23:55:39.575489 3064 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-35" Jan 23 23:55:39.888535 update_engine[2137]: I20260123 23:55:39.888433 2137 update_attempter.cc:509] Updating boot flags... 
Jan 23 23:55:40.156584 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3348) Jan 23 23:55:40.904458 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3350) Jan 23 23:55:41.053410 kubelet[3064]: E0123 23:55:41.050011 3064 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-35\" not found" node="ip-172-31-18-35" Jan 23 23:55:41.603470 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3350) Jan 23 23:55:42.158811 kubelet[3064]: E0123 23:55:42.158741 3064 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-35\" not found" node="ip-172-31-18-35" Jan 23 23:55:44.312972 kubelet[3064]: I0123 23:55:44.312918 3064 apiserver.go:52] "Watching apiserver" Jan 23 23:55:44.364932 kubelet[3064]: I0123 23:55:44.364854 3064 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:55:44.389439 kubelet[3064]: E0123 23:55:44.388047 3064 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-35\" not found" node="ip-172-31-18-35" Jan 23 23:55:44.588482 kubelet[3064]: I0123 23:55:44.586249 3064 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-35" Jan 23 23:55:44.644891 kubelet[3064]: I0123 23:55:44.644819 3064 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-35" Jan 23 23:55:44.780603 kubelet[3064]: E0123 23:55:44.780532 3064 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-35\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-18-35" Jan 23 23:55:44.780603 kubelet[3064]: I0123 23:55:44.780595 3064 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-35" Jan 23 23:55:44.805031 kubelet[3064]: E0123 23:55:44.804708 3064 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-35\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-18-35" Jan 23 23:55:44.805031 kubelet[3064]: I0123 23:55:44.804759 3064 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-35" Jan 23 23:55:44.812152 kubelet[3064]: E0123 23:55:44.812104 3064 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-35\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-18-35" Jan 23 23:55:44.842705 kubelet[3064]: I0123 23:55:44.840092 3064 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-35" Jan 23 23:55:44.860726 kubelet[3064]: E0123 23:55:44.860668 3064 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-35\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-18-35" Jan 23 23:55:47.004429 systemd[1]: Reloading requested from client PID 3604 ('systemctl') (unit session-7.scope)... Jan 23 23:55:47.004459 systemd[1]: Reloading... 
Jan 23 23:55:47.191479 zram_generator::config[3647]: No configuration found. Jan 23 23:55:47.620111 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:55:47.923610 systemd[1]: Reloading finished in 918 ms. Jan 23 23:55:47.995834 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:55:48.011046 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 23:55:48.011856 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:55:48.026559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:55:48.394794 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:55:48.414190 (kubelet)[3714]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:55:48.538611 kubelet[3714]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:55:48.538611 kubelet[3714]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:55:48.538611 kubelet[3714]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:55:48.540190 kubelet[3714]: I0123 23:55:48.538758 3714 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:55:48.559540 kubelet[3714]: I0123 23:55:48.558688 3714 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 23 23:55:48.559540 kubelet[3714]: I0123 23:55:48.558769 3714 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:55:48.559774 kubelet[3714]: I0123 23:55:48.559611 3714 server.go:954] "Client rotation is on, will bootstrap in background" Jan 23 23:55:48.566166 kubelet[3714]: I0123 23:55:48.566087 3714 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 23 23:55:48.571186 kubelet[3714]: I0123 23:55:48.571124 3714 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:55:48.585147 kubelet[3714]: E0123 23:55:48.584009 3714 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:55:48.585147 kubelet[3714]: I0123 23:55:48.584074 3714 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 23 23:55:48.590460 kubelet[3714]: I0123 23:55:48.590368 3714 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 23 23:55:48.591572 kubelet[3714]: I0123 23:55:48.591491 3714 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:55:48.592135 kubelet[3714]: I0123 23:55:48.591562 3714 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-35","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 23 23:55:48.592135 kubelet[3714]: I0123 23:55:48.591976 3714 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:55:48.592135 kubelet[3714]: I0123 23:55:48.591997 3714 container_manager_linux.go:304] "Creating device plugin manager" Jan 23 23:55:48.592135 kubelet[3714]: I0123 23:55:48.592067 3714 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:55:48.592510 kubelet[3714]: I0123 23:55:48.592310 3714 kubelet.go:446] "Attempting to sync node with API server" Jan 23 23:55:48.592510 kubelet[3714]: I0123 23:55:48.592331 3714 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:55:48.592510 kubelet[3714]: I0123 23:55:48.592360 3714 kubelet.go:352] "Adding apiserver pod source" Jan 23 23:55:48.592510 kubelet[3714]: I0123 23:55:48.592379 3714 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:55:48.631951 kubelet[3714]: I0123 23:55:48.629672 3714 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:55:48.631951 kubelet[3714]: I0123 23:55:48.630488 3714 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 23 23:55:48.631951 kubelet[3714]: I0123 23:55:48.631292 3714 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 23:55:48.631951 kubelet[3714]: I0123 23:55:48.631344 3714 server.go:1287] "Started kubelet" Jan 23 23:55:48.641574 kubelet[3714]: I0123 23:55:48.639435 3714 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:55:48.651059 kubelet[3714]: I0123 23:55:48.650582 3714 server.go:169] "Starting 
to listen" address="0.0.0.0" port=10250 Jan 23 23:55:48.662715 kubelet[3714]: I0123 23:55:48.662634 3714 server.go:479] "Adding debug handlers to kubelet server" Jan 23 23:55:48.665270 kubelet[3714]: I0123 23:55:48.664204 3714 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 23:55:48.668068 kubelet[3714]: I0123 23:55:48.657186 3714 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:55:48.668068 kubelet[3714]: E0123 23:55:48.666645 3714 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-18-35\" not found" Jan 23 23:55:48.671503 kubelet[3714]: I0123 23:55:48.651172 3714 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:55:48.671713 kubelet[3714]: I0123 23:55:48.671689 3714 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 23:55:48.671826 kubelet[3714]: I0123 23:55:48.671789 3714 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:55:48.672106 kubelet[3714]: I0123 23:55:48.672087 3714 reconciler.go:26] "Reconciler: start to sync state" Jan 23 23:55:48.690156 kubelet[3714]: I0123 23:55:48.690031 3714 factory.go:221] Registration of the systemd container factory successfully Jan 23 23:55:48.690345 kubelet[3714]: I0123 23:55:48.690286 3714 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:55:48.695638 kubelet[3714]: E0123 23:55:48.695578 3714 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:55:48.699811 kubelet[3714]: I0123 23:55:48.699778 3714 factory.go:221] Registration of the containerd container factory successfully Jan 23 23:55:48.718848 kubelet[3714]: I0123 23:55:48.718597 3714 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 23 23:55:48.721563 kubelet[3714]: I0123 23:55:48.720983 3714 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 23 23:55:48.721563 kubelet[3714]: I0123 23:55:48.721045 3714 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 23 23:55:48.721563 kubelet[3714]: I0123 23:55:48.721082 3714 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 23:55:48.721563 kubelet[3714]: I0123 23:55:48.721097 3714 kubelet.go:2382] "Starting kubelet main sync loop" Jan 23 23:55:48.721563 kubelet[3714]: E0123 23:55:48.721175 3714 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:55:48.821625 kubelet[3714]: E0123 23:55:48.821569 3714 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 23:55:48.896518 kubelet[3714]: I0123 23:55:48.896480 3714 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:55:48.897078 kubelet[3714]: I0123 23:55:48.896965 3714 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:55:48.897348 kubelet[3714]: I0123 23:55:48.897011 3714 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:55:48.898214 kubelet[3714]: I0123 23:55:48.897838 3714 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 23:55:48.898214 kubelet[3714]: I0123 23:55:48.897945 3714 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 23:55:48.898214 kubelet[3714]: I0123 23:55:48.898000 3714 policy_none.go:49] "None policy: Start" Jan 23 23:55:48.898214 kubelet[3714]: I0123 23:55:48.898039 3714 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 23:55:48.898214 kubelet[3714]: I0123 23:55:48.898072 3714 state_mem.go:35] "Initializing new in-memory state store" Jan 23 23:55:48.899431 kubelet[3714]: I0123 23:55:48.898901 3714 state_mem.go:75] "Updated machine memory state" Jan 23 23:55:48.904631 kubelet[3714]: I0123 23:55:48.904467 3714 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 23 23:55:48.905426 kubelet[3714]: I0123 23:55:48.905077 3714 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:55:48.905426 kubelet[3714]: I0123 23:55:48.905110 3714 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:55:48.907322 kubelet[3714]: I0123 23:55:48.906094 3714 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:55:48.923473 kubelet[3714]: E0123 23:55:48.921650 3714 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 23:55:49.024144 kubelet[3714]: I0123 23:55:49.024046 3714 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-35" Jan 23 23:55:49.024551 kubelet[3714]: I0123 23:55:49.024516 3714 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-35" Jan 23 23:55:49.025596 kubelet[3714]: I0123 23:55:49.025245 3714 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-35" Jan 23 23:55:49.037178 kubelet[3714]: I0123 23:55:49.037124 3714 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-35" Jan 23 23:55:49.070562 kubelet[3714]: I0123 23:55:49.070481 3714 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-18-35" Jan 23 23:55:49.071624 kubelet[3714]: I0123 23:55:49.070647 3714 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-35" Jan 23 23:55:49.087358 kubelet[3714]: I0123 23:55:49.087282 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/948760d6256893c5a8499c1604f2e7f0-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-35\" (UID: \"948760d6256893c5a8499c1604f2e7f0\") " pod="kube-system/kube-scheduler-ip-172-31-18-35" Jan 23 23:55:49.089376 kubelet[3714]: I0123 23:55:49.088250 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1e8e03c42641968d7b0ff2bb8557412-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-35\" (UID: \"d1e8e03c42641968d7b0ff2bb8557412\") " pod="kube-system/kube-apiserver-ip-172-31-18-35" Jan 23 23:55:49.089376 kubelet[3714]: I0123 23:55:49.088344 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1e8e03c42641968d7b0ff2bb8557412-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-35\" (UID: \"d1e8e03c42641968d7b0ff2bb8557412\") " pod="kube-system/kube-apiserver-ip-172-31-18-35" Jan 23 23:55:49.089376 kubelet[3714]: I0123 23:55:49.088428 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cf5d8bb9dc909505a8bfa246801a7d89-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-35\" (UID: \"cf5d8bb9dc909505a8bfa246801a7d89\") " pod="kube-system/kube-controller-manager-ip-172-31-18-35" Jan 23 23:55:49.089376 kubelet[3714]: I0123 23:55:49.088474 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf5d8bb9dc909505a8bfa246801a7d89-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-35\" (UID: \"cf5d8bb9dc909505a8bfa246801a7d89\") " pod="kube-system/kube-controller-manager-ip-172-31-18-35" Jan 23 23:55:49.089376 kubelet[3714]: I0123 23:55:49.088516 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1e8e03c42641968d7b0ff2bb8557412-ca-certs\") pod \"kube-apiserver-ip-172-31-18-35\" (UID: \"d1e8e03c42641968d7b0ff2bb8557412\") " pod="kube-system/kube-apiserver-ip-172-31-18-35" Jan 23 23:55:49.089773 kubelet[3714]: I0123 23:55:49.088554 3714 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf5d8bb9dc909505a8bfa246801a7d89-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-35\" (UID: \"cf5d8bb9dc909505a8bfa246801a7d89\") " pod="kube-system/kube-controller-manager-ip-172-31-18-35" Jan 23 23:55:49.089773 kubelet[3714]: I0123 23:55:49.088590 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf5d8bb9dc909505a8bfa246801a7d89-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-35\" (UID: \"cf5d8bb9dc909505a8bfa246801a7d89\") " pod="kube-system/kube-controller-manager-ip-172-31-18-35" Jan 23 23:55:49.089773 kubelet[3714]: I0123 23:55:49.088624 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cf5d8bb9dc909505a8bfa246801a7d89-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-35\" (UID: \"cf5d8bb9dc909505a8bfa246801a7d89\") " pod="kube-system/kube-controller-manager-ip-172-31-18-35" Jan 23 23:55:49.606666 kubelet[3714]: I0123 23:55:49.606148 3714 apiserver.go:52] "Watching apiserver" Jan 23 23:55:49.672534 kubelet[3714]: I0123 23:55:49.672448 3714 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 23:55:49.914777 kubelet[3714]: I0123 23:55:49.914675 3714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-35" podStartSLOduration=0.914650353 podStartE2EDuration="914.650353ms" podCreationTimestamp="2026-01-23 23:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:49.873864741 +0000 UTC m=+1.449058064" watchObservedRunningTime="2026-01-23 23:55:49.914650353 +0000 UTC m=+1.489843640" Jan 23 23:55:49.915862 kubelet[3714]: I0123 23:55:49.914869 3714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-35" podStartSLOduration=0.914858541 podStartE2EDuration="914.858541ms" podCreationTimestamp="2026-01-23 23:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:49.913667637 +0000 UTC m=+1.488860960" watchObservedRunningTime="2026-01-23 23:55:49.914858541 +0000 UTC m=+1.490051828" Jan 23 23:55:49.944297 kubelet[3714]: I0123 23:55:49.944201 3714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-35" podStartSLOduration=0.94417579 podStartE2EDuration="944.17579ms" podCreationTimestamp="2026-01-23 23:55:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:49.938224618 +0000 UTC m=+1.513417929" watchObservedRunningTime="2026-01-23 23:55:49.94417579 +0000 UTC m=+1.519369101" Jan 23 23:55:51.859207 kubelet[3714]: I0123 23:55:51.859141 3714 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 23:55:51.860363 kubelet[3714]: I0123 23:55:51.860202 3714 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 23:55:51.860479 containerd[2180]: time="2026-01-23T23:55:51.859699883Z" level=info msg="No cni 
config template is specified, wait for other system components to drop the config." Jan 23 23:55:52.816358 kubelet[3714]: I0123 23:55:52.816260 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjtpj\" (UniqueName: \"kubernetes.io/projected/d756c7fe-057c-4c89-bc59-c64c65849aaa-kube-api-access-sjtpj\") pod \"kube-proxy-dxlm7\" (UID: \"d756c7fe-057c-4c89-bc59-c64c65849aaa\") " pod="kube-system/kube-proxy-dxlm7" Jan 23 23:55:52.818795 kubelet[3714]: I0123 23:55:52.818492 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d756c7fe-057c-4c89-bc59-c64c65849aaa-kube-proxy\") pod \"kube-proxy-dxlm7\" (UID: \"d756c7fe-057c-4c89-bc59-c64c65849aaa\") " pod="kube-system/kube-proxy-dxlm7" Jan 23 23:55:52.818795 kubelet[3714]: I0123 23:55:52.818655 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d756c7fe-057c-4c89-bc59-c64c65849aaa-lib-modules\") pod \"kube-proxy-dxlm7\" (UID: \"d756c7fe-057c-4c89-bc59-c64c65849aaa\") " pod="kube-system/kube-proxy-dxlm7" Jan 23 23:55:52.818795 kubelet[3714]: I0123 23:55:52.818716 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d756c7fe-057c-4c89-bc59-c64c65849aaa-xtables-lock\") pod \"kube-proxy-dxlm7\" (UID: \"d756c7fe-057c-4c89-bc59-c64c65849aaa\") " pod="kube-system/kube-proxy-dxlm7" Jan 23 23:55:53.021073 kubelet[3714]: I0123 23:55:53.020901 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frgzw\" (UniqueName: \"kubernetes.io/projected/eaaa8e7e-357b-4899-ab59-c941d4d76757-kube-api-access-frgzw\") pod \"tigera-operator-7dcd859c48-ztdlc\" (UID: \"eaaa8e7e-357b-4899-ab59-c941d4d76757\") " pod="tigera-operator/tigera-operator-7dcd859c48-ztdlc" Jan 23 23:55:53.021073 kubelet[3714]: I0123 23:55:53.020989 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/eaaa8e7e-357b-4899-ab59-c941d4d76757-var-lib-calico\") pod \"tigera-operator-7dcd859c48-ztdlc\" (UID: \"eaaa8e7e-357b-4899-ab59-c941d4d76757\") " pod="tigera-operator/tigera-operator-7dcd859c48-ztdlc" Jan 23 23:55:53.083922 containerd[2180]: time="2026-01-23T23:55:53.083235009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dxlm7,Uid:d756c7fe-057c-4c89-bc59-c64c65849aaa,Namespace:kube-system,Attempt:0,}" Jan 23 23:55:53.136433 containerd[2180]: time="2026-01-23T23:55:53.136080645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:53.137509 containerd[2180]: time="2026-01-23T23:55:53.136187265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:53.137765 containerd[2180]: time="2026-01-23T23:55:53.137489806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:53.137923 containerd[2180]: time="2026-01-23T23:55:53.137804674Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
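The containerd message above ("No cni config template is specified, wait for other system components to drop the config") describes the bootstrap ordering this log follows: containerd starts with an empty CNI configuration directory and only gains pod networking once a network component (here, Calico installed by the tigera-operator) writes a conflist into /etc/cni/net.d. The sketch below illustrates that "wait for the config to be dropped" behavior using only the Go standard library; the directory path is containerd's conventional default, but the polling loop, struct, and function name are illustrative assumptions, not containerd's actual implementation.

package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// cniConfList models only the fields of a CNI .conflist we inspect here.
type cniConfList struct {
	CNIVersion string `json:"cniVersion"`
	Name       string `json:"name"`
	Plugins    []struct {
		Type string `json:"type"`
	} `json:"plugins"`
}

// waitForCNIConf polls dir until a parseable .conflist appears, mirroring
// (in spirit, not in code) containerd's wait for another component to drop
// the CNI config.
func waitForCNIConf(dir string, interval time.Duration) (*cniConfList, error) {
	for {
		entries, err := os.ReadDir(dir)
		if err != nil && !os.IsNotExist(err) {
			return nil, err
		}
		for _, e := range entries {
			if filepath.Ext(e.Name()) != ".conflist" {
				continue
			}
			data, err := os.ReadFile(filepath.Join(dir, e.Name()))
			if err != nil {
				continue
			}
			var conf cniConfList
			if json.Unmarshal(data, &conf) == nil && len(conf.Plugins) > 0 {
				return &conf, nil
			}
		}
		time.Sleep(interval)
	}
}

func main() {
	conf, err := waitForCNIConf("/etc/cni/net.d", 2*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("network %q ready (cniVersion %s)\n", conf.Name, conf.CNIVersion)
}

Until that file lands, pods needing pod networking cannot start, which is why "cni plugin not initialized" errors appear further down in this log.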
Jan 23 23:55:53.213549 containerd[2180]: time="2026-01-23T23:55:53.213486610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dxlm7,Uid:d756c7fe-057c-4c89-bc59-c64c65849aaa,Namespace:kube-system,Attempt:0,} returns sandbox id \"a76c82e184addcbf6aa55269929f655e863c90cc41db057a42bf9d4a802866eb\"" Jan 23 23:55:53.219858 containerd[2180]: time="2026-01-23T23:55:53.219798646Z" level=info msg="CreateContainer within sandbox \"a76c82e184addcbf6aa55269929f655e863c90cc41db057a42bf9d4a802866eb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 23:55:53.255879 containerd[2180]: time="2026-01-23T23:55:53.255784534Z" level=info msg="CreateContainer within sandbox \"a76c82e184addcbf6aa55269929f655e863c90cc41db057a42bf9d4a802866eb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ab5885d69d18289a73fc9c04b2a6be4d6ca70b89ee73cb29c810c730abd22445\"" Jan 23 23:55:53.257876 containerd[2180]: time="2026-01-23T23:55:53.257655430Z" level=info msg="StartContainer for \"ab5885d69d18289a73fc9c04b2a6be4d6ca70b89ee73cb29c810c730abd22445\"" Jan 23 23:55:53.279759 containerd[2180]: time="2026-01-23T23:55:53.279534478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-ztdlc,Uid:eaaa8e7e-357b-4899-ab59-c941d4d76757,Namespace:tigera-operator,Attempt:0,}" Jan 23 23:55:53.347814 containerd[2180]: time="2026-01-23T23:55:53.347376683Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:53.347814 containerd[2180]: time="2026-01-23T23:55:53.347567207Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:53.347814 containerd[2180]: time="2026-01-23T23:55:53.347616227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:53.348107 containerd[2180]: time="2026-01-23T23:55:53.347830859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:53.420724 containerd[2180]: time="2026-01-23T23:55:53.420638735Z" level=info msg="StartContainer for \"ab5885d69d18289a73fc9c04b2a6be4d6ca70b89ee73cb29c810c730abd22445\" returns successfully" Jan 23 23:55:53.459051 containerd[2180]: time="2026-01-23T23:55:53.459000707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-ztdlc,Uid:eaaa8e7e-357b-4899-ab59-c941d4d76757,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"cca128d04984ed3a947f0a961292beb96882dc02f565dbf373501cc631e26fb7\"" Jan 23 23:55:53.465008 containerd[2180]: time="2026-01-23T23:55:53.464924147Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 23:55:53.964908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2593347107.mount: Deactivated successfully. Jan 23 23:55:54.642142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1332318847.mount: Deactivated successfully.
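The RunPodSandbox → CreateContainer → StartContainer sequence above is the standard CRI call order the kubelet drives against containerd's runtime service. A minimal sketch of the same three calls made directly over the CRI v1 gRPC API follows; the socket path is containerd's default, the sandbox metadata is copied from the log, but the kube-proxy image reference is a placeholder (the log does not record it), and real kubelet code adds much more configuration.

package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtime.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// 1. RunPodSandbox: create the pod's sandbox (namespace holder).
	sandboxCfg := &runtime.PodSandboxConfig{
		Metadata: &runtime.PodSandboxMetadata{
			Name:      "kube-proxy-dxlm7",
			Uid:       "d756c7fe-057c-4c89-bc59-c64c65849aaa",
			Namespace: "kube-system",
			Attempt:   0,
		},
	}
	sb, err := rt.RunPodSandbox(ctx, &runtime.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer inside that sandbox.
	ctr, err := rt.CreateContainer(ctx, &runtime.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtime.ContainerConfig{
			Metadata: &runtime.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
			// Placeholder image ref; not recorded in this log.
			Image: &runtime.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.34.0"},
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer: the "StartContainer ... returns successfully" lines above.
	if _, err := rt.StartContainer(ctx, &runtime.StartContainerRequest{ContainerId: ctr.ContainerId}); err != nil {
		log.Fatal(err)
	}
}

The interleaved "loading plugin io.containerd.ttrpc.v1.*" lines come from the runc v2 shim that containerd spawns for each new sandbox, which is why the same four plugin-load messages recur before every "returns sandbox id".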
Jan 23 23:55:55.573382 containerd[2180]: time="2026-01-23T23:55:55.573273974Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:55.575651 containerd[2180]: time="2026-01-23T23:55:55.575579498Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 23 23:55:55.577749 containerd[2180]: time="2026-01-23T23:55:55.577561646Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:55.585831 containerd[2180]: time="2026-01-23T23:55:55.585720614Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:55.587907 containerd[2180]: time="2026-01-23T23:55:55.587672486Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.122448339s" Jan 23 23:55:55.587907 containerd[2180]: time="2026-01-23T23:55:55.587741558Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 23 23:55:55.595539 containerd[2180]: time="2026-01-23T23:55:55.595096346Z" level=info msg="CreateContainer within sandbox \"cca128d04984ed3a947f0a961292beb96882dc02f565dbf373501cc631e26fb7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 23:55:55.625670 containerd[2180]: time="2026-01-23T23:55:55.625578566Z" level=info msg="CreateContainer within sandbox \"cca128d04984ed3a947f0a961292beb96882dc02f565dbf373501cc631e26fb7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3c81cd9e23c6632547c173c590210d27e0fe04a00ddfcba44bada0d3cad7983c\"" Jan 23 23:55:55.628227 containerd[2180]: time="2026-01-23T23:55:55.626525390Z" level=info msg="StartContainer for \"3c81cd9e23c6632547c173c590210d27e0fe04a00ddfcba44bada0d3cad7983c\"" Jan 23 23:55:55.748988 containerd[2180]: time="2026-01-23T23:55:55.748885310Z" level=info msg="StartContainer for \"3c81cd9e23c6632547c173c590210d27e0fe04a00ddfcba44bada0d3cad7983c\" returns successfully" Jan 23 23:55:55.831826 kubelet[3714]: I0123 23:55:55.830004 3714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dxlm7" podStartSLOduration=3.829979211 podStartE2EDuration="3.829979211s" podCreationTimestamp="2026-01-23 23:55:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:53.818549209 +0000 UTC m=+5.393742508" watchObservedRunningTime="2026-01-23 23:55:55.829979211 +0000 UTC m=+7.405172498" Jan 23 23:55:58.445584 kubelet[3714]: I0123 23:55:58.444818 3714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-ztdlc" podStartSLOduration=4.317101549 podStartE2EDuration="6.444783652s" podCreationTimestamp="2026-01-23 23:55:52 +0000 UTC" firstStartedPulling="2026-01-23 23:55:53.462354779 +0000 UTC m=+5.037548054" lastFinishedPulling="2026-01-23 23:55:55.59003687 +0000 UTC m=+7.165230157" observedRunningTime="2026-01-23 23:55:55.834020367 +0000 UTC m=+7.409213678" watchObservedRunningTime="2026-01-23 23:55:58.444783652 +0000 UTC m=+10.019976963"
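The pod_startup_latency_tracker entries encode a fixed relationship that the printed values bear out: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling), since the pod-startup SLI excludes image pulling. For kube-proxy the pulling fields are the zero time, so SLO equals E2E; for tigera-operator the two differ by the ~2.13s pull seen just above. A minimal check of the tigera-operator entry, using the printed monotonic (m=+) offsets:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Monotonic offsets (m=+...) copied from the tigera-operator entry above.
	firstStartedPulling := 5.037548054 // 23:55:53.462354779
	lastFinishedPulling := 7.165230157 // 23:55:55.59003687
	created := time.Date(2026, 1, 23, 23, 55, 52, 0, time.UTC)
	watchObserved := time.Date(2026, 1, 23, 23, 55, 58, 444783652, time.UTC)

	e2e := watchObserved.Sub(created) // podStartE2EDuration
	pull := time.Duration((lastFinishedPulling - firstStartedPulling) * float64(time.Second))
	slo := e2e - pull // podStartSLOduration

	fmt.Println(e2e) // 6.444783652s, as logged
	fmt.Println(slo) // ~4.317101549s, matching podStartSLOduration (up to float rounding)
}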
Jan 23 23:56:02.622621 sudo[2535]: pam_unix(sudo:session): session closed for user root Jan 23 23:56:02.710560 sshd[2512]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:02.730717 systemd[1]: sshd@6-172.31.18.35:22-4.153.228.146:39140.service: Deactivated successfully. Jan 23 23:56:02.745870 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 23:56:02.752470 systemd-logind[2134]: Session 7 logged out. Waiting for processes to exit. Jan 23 23:56:02.761633 systemd-logind[2134]: Removed session 7. Jan 23 23:56:23.869950 kubelet[3714]: I0123 23:56:23.869620 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/3af23f65-4331-4e0b-b691-b9395dd71786-typha-certs\") pod \"calico-typha-54cf996647-8mk85\" (UID: \"3af23f65-4331-4e0b-b691-b9395dd71786\") " pod="calico-system/calico-typha-54cf996647-8mk85" Jan 23 23:56:23.869950 kubelet[3714]: I0123 23:56:23.869707 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3af23f65-4331-4e0b-b691-b9395dd71786-tigera-ca-bundle\") pod \"calico-typha-54cf996647-8mk85\" (UID: \"3af23f65-4331-4e0b-b691-b9395dd71786\") " pod="calico-system/calico-typha-54cf996647-8mk85" Jan 23 23:56:23.869950 kubelet[3714]: I0123 23:56:23.869793 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7h27\" (UniqueName: \"kubernetes.io/projected/3af23f65-4331-4e0b-b691-b9395dd71786-kube-api-access-b7h27\") pod \"calico-typha-54cf996647-8mk85\" (UID: \"3af23f65-4331-4e0b-b691-b9395dd71786\") " pod="calico-system/calico-typha-54cf996647-8mk85" Jan 23 23:56:24.067303 containerd[2180]: time="2026-01-23T23:56:24.066694611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54cf996647-8mk85,Uid:3af23f65-4331-4e0b-b691-b9395dd71786,Namespace:calico-system,Attempt:0,}" Jan 23 23:56:24.070503 kubelet[3714]: I0123 23:56:24.070428 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/32255686-43b0-4d33-8b7a-91fbbef19a73-node-certs\") pod \"calico-node-xn285\" (UID: \"32255686-43b0-4d33-8b7a-91fbbef19a73\") " pod="calico-system/calico-node-xn285" Jan 23 23:56:24.070503 kubelet[3714]: I0123 23:56:24.070507 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/32255686-43b0-4d33-8b7a-91fbbef19a73-cni-log-dir\") pod \"calico-node-xn285\" (UID: \"32255686-43b0-4d33-8b7a-91fbbef19a73\") " pod="calico-system/calico-node-xn285" Jan 23 23:56:24.070716 kubelet[3714]: I0123 23:56:24.070552 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/32255686-43b0-4d33-8b7a-91fbbef19a73-var-run-calico\") pod \"calico-node-xn285\" (UID: \"32255686-43b0-4d33-8b7a-91fbbef19a73\") " pod="calico-system/calico-node-xn285" Jan 23 23:56:24.070716 kubelet[3714]: I0123 23:56:24.070588 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32255686-43b0-4d33-8b7a-91fbbef19a73-lib-modules\") pod \"calico-node-xn285\" (UID: \"32255686-43b0-4d33-8b7a-91fbbef19a73\") " pod="calico-system/calico-node-xn285" Jan 23 23:56:24.070716 kubelet[3714]: I0123 23:56:24.070624 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32255686-43b0-4d33-8b7a-91fbbef19a73-xtables-lock\") pod \"calico-node-xn285\" (UID: \"32255686-43b0-4d33-8b7a-91fbbef19a73\") " pod="calico-system/calico-node-xn285" Jan 23 23:56:24.070716 kubelet[3714]: I0123 23:56:24.070665 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/32255686-43b0-4d33-8b7a-91fbbef19a73-cni-net-dir\") pod \"calico-node-xn285\" (UID: \"32255686-43b0-4d33-8b7a-91fbbef19a73\") " pod="calico-system/calico-node-xn285" Jan 23 23:56:24.070716 kubelet[3714]: I0123 23:56:24.070706 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/32255686-43b0-4d33-8b7a-91fbbef19a73-var-lib-calico\") pod \"calico-node-xn285\" (UID: \"32255686-43b0-4d33-8b7a-91fbbef19a73\") " pod="calico-system/calico-node-xn285" Jan 23 23:56:24.070977 kubelet[3714]: I0123 23:56:24.070744 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32255686-43b0-4d33-8b7a-91fbbef19a73-tigera-ca-bundle\") pod \"calico-node-xn285\" (UID: \"32255686-43b0-4d33-8b7a-91fbbef19a73\") " pod="calico-system/calico-node-xn285" Jan 23 23:56:24.070977 kubelet[3714]: I0123 23:56:24.070778 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/32255686-43b0-4d33-8b7a-91fbbef19a73-flexvol-driver-host\") pod \"calico-node-xn285\" (UID: \"32255686-43b0-4d33-8b7a-91fbbef19a73\") " pod="calico-system/calico-node-xn285" Jan 23 23:56:24.070977 kubelet[3714]: I0123 23:56:24.070815 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/32255686-43b0-4d33-8b7a-91fbbef19a73-cni-bin-dir\") pod \"calico-node-xn285\" (UID: \"32255686-43b0-4d33-8b7a-91fbbef19a73\") " pod="calico-system/calico-node-xn285" Jan 23 23:56:24.070977 kubelet[3714]: I0123 23:56:24.070850 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxcdg\" (UniqueName: \"kubernetes.io/projected/32255686-43b0-4d33-8b7a-91fbbef19a73-kube-api-access-nxcdg\") pod \"calico-node-xn285\" (UID: \"32255686-43b0-4d33-8b7a-91fbbef19a73\") " pod="calico-system/calico-node-xn285" Jan 23 23:56:24.070977 kubelet[3714]: I0123 23:56:24.070888 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/32255686-43b0-4d33-8b7a-91fbbef19a73-policysync\") pod \"calico-node-xn285\" (UID: \"32255686-43b0-4d33-8b7a-91fbbef19a73\") " pod="calico-system/calico-node-xn285" Jan 23 23:56:24.113597 kubelet[3714]: E0123 23:56:24.112936 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-md8cr" podUID="ef69f672-ed17-43f4-a4a8-8456f661673c" Jan 23 23:56:24.144651 containerd[2180]: time="2026-01-23T23:56:24.142017808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:24.145468 containerd[2180]: time="2026-01-23T23:56:24.144965224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:24.145468 containerd[2180]: time="2026-01-23T23:56:24.145021408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:24.150284 containerd[2180]: time="2026-01-23T23:56:24.149909500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:24.175507 kubelet[3714]: I0123 23:56:24.171858 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpgpx\" (UniqueName: \"kubernetes.io/projected/ef69f672-ed17-43f4-a4a8-8456f661673c-kube-api-access-vpgpx\") pod \"csi-node-driver-md8cr\" (UID: \"ef69f672-ed17-43f4-a4a8-8456f661673c\") " pod="calico-system/csi-node-driver-md8cr" Jan 23 23:56:24.175507 kubelet[3714]: I0123 23:56:24.172344 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ef69f672-ed17-43f4-a4a8-8456f661673c-registration-dir\") pod \"csi-node-driver-md8cr\" (UID: \"ef69f672-ed17-43f4-a4a8-8456f661673c\") " pod="calico-system/csi-node-driver-md8cr" Jan 23 23:56:24.175507 kubelet[3714]: I0123 23:56:24.172564 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ef69f672-ed17-43f4-a4a8-8456f661673c-socket-dir\") pod \"csi-node-driver-md8cr\" (UID: \"ef69f672-ed17-43f4-a4a8-8456f661673c\") " pod="calico-system/csi-node-driver-md8cr" Jan 23 23:56:24.175507 kubelet[3714]: I0123 23:56:24.172979 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef69f672-ed17-43f4-a4a8-8456f661673c-kubelet-dir\") pod \"csi-node-driver-md8cr\" (UID: \"ef69f672-ed17-43f4-a4a8-8456f661673c\") " pod="calico-system/csi-node-driver-md8cr" Jan 23 23:56:24.179243 kubelet[3714]: I0123 23:56:24.178497 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ef69f672-ed17-43f4-a4a8-8456f661673c-varrun\") pod \"csi-node-driver-md8cr\" (UID: \"ef69f672-ed17-43f4-a4a8-8456f661673c\") " pod="calico-system/csi-node-driver-md8cr" Jan 23 23:56:24.203259 kubelet[3714]: E0123 23:56:24.203195 3714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:24.205680 kubelet[3714]: W0123 23:56:24.205491 3714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:24.205813 kubelet[3714]: E0123 23:56:24.205675 3714 plugins.go:695] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[... the same driver-call.go:262 / driver-call.go:149 / plugins.go:695 message triple repeats for every probe attempt between Jan 23 23:56:24.284749 and Jan 23 23:56:24.338194 ...]
Jan 23 23:56:24.359892 kubelet[3714]: E0123 23:56:24.359769 3714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:24.359892 kubelet[3714]: W0123 23:56:24.359802 3714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:24.359892 kubelet[3714]: E0123 23:56:24.359833 3714 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:24.392028 containerd[2180]: time="2026-01-23T23:56:24.391865069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-54cf996647-8mk85,Uid:3af23f65-4331-4e0b-b691-b9395dd71786,Namespace:calico-system,Attempt:0,} returns sandbox id \"8fbd81cae1c5366aed3e5192a43489ce88316d76ecdbf1c090a97761564098a8\"" Jan 23 23:56:24.395902 containerd[2180]: time="2026-01-23T23:56:24.395704985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 23:56:24.590571 containerd[2180]: time="2026-01-23T23:56:24.590497350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xn285,Uid:32255686-43b0-4d33-8b7a-91fbbef19a73,Namespace:calico-system,Attempt:0,}" Jan 23 23:56:24.625509 containerd[2180]: time="2026-01-23T23:56:24.625305186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:24.625509 containerd[2180]: time="2026-01-23T23:56:24.625432770Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:24.625762 containerd[2180]: time="2026-01-23T23:56:24.625646826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:24.627310 containerd[2180]: time="2026-01-23T23:56:24.627152214Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:24.694170 containerd[2180]: time="2026-01-23T23:56:24.693974550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xn285,Uid:32255686-43b0-4d33-8b7a-91fbbef19a73,Namespace:calico-system,Attempt:0,} returns sandbox id \"91da1951e8ab39058c46774e2593482eb4ca34d4c31dae8d9efba8183364baff\"" Jan 23 23:56:25.726137 kubelet[3714]: E0123 23:56:25.724785 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-md8cr" podUID="ef69f672-ed17-43f4-a4a8-8456f661673c" Jan 23 23:56:25.806836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount834861421.mount: Deactivated successfully. Jan 23 23:56:26.665362 containerd[2180]: time="2026-01-23T23:56:26.665231624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:26.667859 containerd[2180]: time="2026-01-23T23:56:26.667774784Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Jan 23 23:56:26.669344 containerd[2180]: time="2026-01-23T23:56:26.669252860Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:26.673467 containerd[2180]: time="2026-01-23T23:56:26.672811076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:26.675928 containerd[2180]: time="2026-01-23T23:56:26.674606456Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.278803839s" Jan 23 23:56:26.675928 containerd[2180]: time="2026-01-23T23:56:26.674687600Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 23 23:56:26.678245 containerd[2180]: time="2026-01-23T23:56:26.676488356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 23:56:26.706281 containerd[2180]: time="2026-01-23T23:56:26.706204124Z" level=info msg="CreateContainer within sandbox \"8fbd81cae1c5366aed3e5192a43489ce88316d76ecdbf1c090a97761564098a8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 23:56:26.735920 containerd[2180]: time="2026-01-23T23:56:26.735857684Z" level=info msg="CreateContainer within sandbox \"8fbd81cae1c5366aed3e5192a43489ce88316d76ecdbf1c090a97761564098a8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"ffeef8d645487006e7d549b921a6e7e8f56dccc366a3be2dd353d00ddcc3af5a\"" Jan 23 23:56:26.738333 containerd[2180]: time="2026-01-23T23:56:26.738281288Z" level=info msg="StartContainer for \"ffeef8d645487006e7d549b921a6e7e8f56dccc366a3be2dd353d00ddcc3af5a\""
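The typha pull above (ImageCreate events for the tag, the image id, and the repo digest, then "Pulled image ... size \"33090541\" in 2.278803839s") is what the CRI image service reports for a single PullImage call. A sketch of the equivalent calls over the CRI v1 API follows; connection setup is as in the earlier sketch, auth and sandbox config are omitted, and field names are from the stock CRI v1 Go bindings (in particular the generated Size_ field), not containerd internals.

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtime "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	img := runtime.NewImageServiceClient(conn)
	ctx := context.Background()

	ref := &runtime.ImageSpec{Image: "ghcr.io/flatcar/calico/typha:v3.30.4"}

	// PullImage resolves the tag and returns the canonical image reference,
	// matching the "returns image reference" line in the log.
	pulled, err := img.PullImage(ctx, &runtime.PullImageRequest{Image: ref})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled:", pulled.ImageRef)

	// ImageStatus reports the id, repo digests, and size logged above.
	status, err := img.ImageStatus(ctx, &runtime.ImageStatusRequest{Image: ref})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("id:", status.Image.Id)
	fmt.Println("repoDigests:", status.Image.RepoDigests)
	fmt.Println("size:", status.Image.Size_)
}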
level=info msg="StartContainer for \"ffeef8d645487006e7d549b921a6e7e8f56dccc366a3be2dd353d00ddcc3af5a\" returns successfully" Jan 23 23:56:26.996142 kubelet[3714]: I0123 23:56:26.995043 3714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-54cf996647-8mk85" podStartSLOduration=1.713220823 podStartE2EDuration="3.995017738s" podCreationTimestamp="2026-01-23 23:56:23 +0000 UTC" firstStartedPulling="2026-01-23 23:56:24.394311437 +0000 UTC m=+35.969504724" lastFinishedPulling="2026-01-23 23:56:26.67610834 +0000 UTC m=+38.251301639" observedRunningTime="2026-01-23 23:56:26.99447145 +0000 UTC m=+38.569665121" watchObservedRunningTime="2026-01-23 23:56:26.995017738 +0000 UTC m=+38.570211037" Jan 23 23:56:27.043344 kubelet[3714]: E0123 23:56:27.043278 3714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:27.043344 kubelet[3714]: W0123 23:56:27.043323 3714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:27.043344 kubelet[3714]: E0123 23:56:27.043358 3714 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:27.045033 kubelet[3714]: E0123 23:56:27.044984 3714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:27.046422 kubelet[3714]: W0123 23:56:27.045023 3714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:27.046422 kubelet[3714]: E0123 23:56:27.045099 3714 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:27.047525 kubelet[3714]: E0123 23:56:27.047358 3714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:27.055563 kubelet[3714]: W0123 23:56:27.055501 3714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:27.055563 kubelet[3714]: E0123 23:56:27.055564 3714 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:27.057488 kubelet[3714]: E0123 23:56:27.057432 3714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:27.057488 kubelet[3714]: W0123 23:56:27.057516 3714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:27.058080 kubelet[3714]: E0123 23:56:27.057552 3714 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
[... the same driver-call.go:262 / driver-call.go:149 / plugins.go:695 message triple repeats for every probe attempt between Jan 23 23:56:27.044984 and Jan 23 23:56:27.172763 ...]
Error: unexpected end of JSON input" Jan 23 23:56:27.180226 kubelet[3714]: E0123 23:56:27.174771 3714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:27.180226 kubelet[3714]: W0123 23:56:27.174805 3714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:27.180226 kubelet[3714]: E0123 23:56:27.175363 3714 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:27.180226 kubelet[3714]: E0123 23:56:27.178468 3714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:27.180226 kubelet[3714]: W0123 23:56:27.178532 3714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:27.180226 kubelet[3714]: E0123 23:56:27.178595 3714 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:27.184419 kubelet[3714]: E0123 23:56:27.182648 3714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:27.184419 kubelet[3714]: W0123 23:56:27.182723 3714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:27.184419 kubelet[3714]: E0123 23:56:27.182786 3714 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:27.184419 kubelet[3714]: E0123 23:56:27.184046 3714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:27.184419 kubelet[3714]: W0123 23:56:27.184077 3714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:27.184419 kubelet[3714]: E0123 23:56:27.184111 3714 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:56:27.722645 kubelet[3714]: E0123 23:56:27.721631 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-md8cr" podUID="ef69f672-ed17-43f4-a4a8-8456f661673c" Jan 23 23:56:27.911514 containerd[2180]: time="2026-01-23T23:56:27.911436466Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:27.913879 containerd[2180]: time="2026-01-23T23:56:27.913511074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Jan 23 23:56:27.915759 containerd[2180]: time="2026-01-23T23:56:27.915337978Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:27.919517 containerd[2180]: time="2026-01-23T23:56:27.919461118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:27.921166 containerd[2180]: time="2026-01-23T23:56:27.921107674Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.244551218s" Jan 23 23:56:27.921521 containerd[2180]: time="2026-01-23T23:56:27.921339394Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 23 23:56:27.928087 containerd[2180]: time="2026-01-23T23:56:27.927698626Z" level=info msg="CreateContainer within sandbox \"91da1951e8ab39058c46774e2593482eb4ca34d4c31dae8d9efba8183364baff\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 23:56:27.954821 containerd[2180]: time="2026-01-23T23:56:27.954725278Z" level=info msg="CreateContainer within sandbox \"91da1951e8ab39058c46774e2593482eb4ca34d4c31dae8d9efba8183364baff\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ea7c422f447e3f242430836c442db5bea36721cfac2cc6862eec1a8388b282f5\"" Jan 23 23:56:27.956900 containerd[2180]: time="2026-01-23T23:56:27.956705698Z" level=info msg="StartContainer for \"ea7c422f447e3f242430836c442db5bea36721cfac2cc6862eec1a8388b282f5\"" Jan 23 23:56:28.001999 kubelet[3714]: E0123 23:56:28.001835 3714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:28.001999 kubelet[3714]: W0123 23:56:28.001879 3714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:28.001999 kubelet[3714]: E0123 23:56:28.001915 3714 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory 
nodeagent~uds, skipping.
Error: unexpected end of JSON input" Jan 23 23:56:28.110761 kubelet[3714]: E0123 23:56:28.110675 3714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:28.110761 kubelet[3714]: W0123 23:56:28.110709 3714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:28.111158 kubelet[3714]: E0123 23:56:28.110989 3714 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:28.112361 kubelet[3714]: E0123 23:56:28.111813 3714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:28.112361 kubelet[3714]: W0123 23:56:28.111843 3714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:28.112361 kubelet[3714]: E0123 23:56:28.111872 3714 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:28.113621 kubelet[3714]: E0123 23:56:28.113590 3714 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:56:28.113872 kubelet[3714]: W0123 23:56:28.113762 3714 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:56:28.113872 kubelet[3714]: E0123 23:56:28.113797 3714 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:56:28.165470 containerd[2180]: time="2026-01-23T23:56:28.165243163Z" level=info msg="StartContainer for \"ea7c422f447e3f242430836c442db5bea36721cfac2cc6862eec1a8388b282f5\" returns successfully" Jan 23 23:56:28.364565 containerd[2180]: time="2026-01-23T23:56:28.364365956Z" level=info msg="shim disconnected" id=ea7c422f447e3f242430836c442db5bea36721cfac2cc6862eec1a8388b282f5 namespace=k8s.io Jan 23 23:56:28.366917 containerd[2180]: time="2026-01-23T23:56:28.366440624Z" level=warning msg="cleaning up after shim disconnected" id=ea7c422f447e3f242430836c442db5bea36721cfac2cc6862eec1a8388b282f5 namespace=k8s.io Jan 23 23:56:28.366917 containerd[2180]: time="2026-01-23T23:56:28.366490628Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:56:28.689086 systemd[1]: run-containerd-runc-k8s.io-ea7c422f447e3f242430836c442db5bea36721cfac2cc6862eec1a8388b282f5-runc.GpYGyK.mount: Deactivated successfully. Jan 23 23:56:28.689368 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea7c422f447e3f242430836c442db5bea36721cfac2cc6862eec1a8388b282f5-rootfs.mount: Deactivated successfully. 
Jan 23 23:56:28.985525 containerd[2180]: time="2026-01-23T23:56:28.982801932Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 23:56:29.721755 kubelet[3714]: E0123 23:56:29.721639 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-md8cr" podUID="ef69f672-ed17-43f4-a4a8-8456f661673c" Jan 23 23:56:31.722497 kubelet[3714]: E0123 23:56:31.722377 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-md8cr" podUID="ef69f672-ed17-43f4-a4a8-8456f661673c" Jan 23 23:56:31.889975 containerd[2180]: time="2026-01-23T23:56:31.889889210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:31.892941 containerd[2180]: time="2026-01-23T23:56:31.892868846Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 23 23:56:31.895720 containerd[2180]: time="2026-01-23T23:56:31.895647422Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:31.901747 containerd[2180]: time="2026-01-23T23:56:31.901661678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:31.904189 containerd[2180]: time="2026-01-23T23:56:31.903213170Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.920352798s" Jan 23 23:56:31.904189 containerd[2180]: time="2026-01-23T23:56:31.903278678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 23 23:56:31.909458 containerd[2180]: time="2026-01-23T23:56:31.909378398Z" level=info msg="CreateContainer within sandbox \"91da1951e8ab39058c46774e2593482eb4ca34d4c31dae8d9efba8183364baff\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 23:56:31.942894 containerd[2180]: time="2026-01-23T23:56:31.942840338Z" level=info msg="CreateContainer within sandbox \"91da1951e8ab39058c46774e2593482eb4ca34d4c31dae8d9efba8183364baff\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"946098616baf2d158974e3539d94236132dd8cd4aaf5b45f699bd2eca0d4d055\"" Jan 23 23:56:31.944114 containerd[2180]: time="2026-01-23T23:56:31.944031026Z" level=info msg="StartContainer for \"946098616baf2d158974e3539d94236132dd8cd4aaf5b45f699bd2eca0d4d055\"" Jan 23 23:56:32.077336 containerd[2180]: time="2026-01-23T23:56:32.077167511Z" level=info msg="StartContainer for \"946098616baf2d158974e3539d94236132dd8cd4aaf5b45f699bd2eca0d4d055\" returns successfully" Jan 23 23:56:33.240614 
kubelet[3714]: I0123 23:56:33.240547 3714 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 23 23:56:33.267722 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-946098616baf2d158974e3539d94236132dd8cd4aaf5b45f699bd2eca0d4d055-rootfs.mount: Deactivated successfully. Jan 23 23:56:33.309166 kubelet[3714]: I0123 23:56:33.308946 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27rxw\" (UniqueName: \"kubernetes.io/projected/e5b6c2c1-0276-4f5f-9587-f464f0aab16d-kube-api-access-27rxw\") pod \"coredns-668d6bf9bc-9ddld\" (UID: \"e5b6c2c1-0276-4f5f-9587-f464f0aab16d\") " pod="kube-system/coredns-668d6bf9bc-9ddld" Jan 23 23:56:33.309166 kubelet[3714]: I0123 23:56:33.309027 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e5b6c2c1-0276-4f5f-9587-f464f0aab16d-config-volume\") pod \"coredns-668d6bf9bc-9ddld\" (UID: \"e5b6c2c1-0276-4f5f-9587-f464f0aab16d\") " pod="kube-system/coredns-668d6bf9bc-9ddld" Jan 23 23:56:33.513351 kubelet[3714]: I0123 23:56:33.511292 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/062e26d7-bfb2-4194-8340-6fddf424a2ce-goldmane-ca-bundle\") pod \"goldmane-666569f655-qtwrz\" (UID: \"062e26d7-bfb2-4194-8340-6fddf424a2ce\") " pod="calico-system/goldmane-666569f655-qtwrz" Jan 23 23:56:33.513351 kubelet[3714]: I0123 23:56:33.511361 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdmj6\" (UniqueName: \"kubernetes.io/projected/062e26d7-bfb2-4194-8340-6fddf424a2ce-kube-api-access-tdmj6\") pod \"goldmane-666569f655-qtwrz\" (UID: \"062e26d7-bfb2-4194-8340-6fddf424a2ce\") " pod="calico-system/goldmane-666569f655-qtwrz" Jan 23 23:56:33.513351 kubelet[3714]: I0123 23:56:33.511452 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqd4z\" (UniqueName: \"kubernetes.io/projected/e511819f-7fe1-47d1-b5b7-5258bf08f097-kube-api-access-tqd4z\") pod \"calico-apiserver-6d5497fbb7-6rwm7\" (UID: \"e511819f-7fe1-47d1-b5b7-5258bf08f097\") " pod="calico-apiserver/calico-apiserver-6d5497fbb7-6rwm7" Jan 23 23:56:33.513351 kubelet[3714]: I0123 23:56:33.511495 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/70864b69-f424-425f-943d-f03fcd5d49da-calico-apiserver-certs\") pod \"calico-apiserver-6d5497fbb7-xhxnx\" (UID: \"70864b69-f424-425f-943d-f03fcd5d49da\") " pod="calico-apiserver/calico-apiserver-6d5497fbb7-xhxnx" Jan 23 23:56:33.513351 kubelet[3714]: I0123 23:56:33.511544 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dt7w\" (UniqueName: \"kubernetes.io/projected/8b558c76-7f3e-4806-8d02-51d3c08c8f13-kube-api-access-8dt7w\") pod \"coredns-668d6bf9bc-dxn2k\" (UID: \"8b558c76-7f3e-4806-8d02-51d3c08c8f13\") " pod="kube-system/coredns-668d6bf9bc-dxn2k" Jan 23 23:56:33.513843 kubelet[3714]: I0123 23:56:33.511584 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2-tigera-ca-bundle\") pod 
\"calico-kube-controllers-6677f6f656-js6vm\" (UID: \"9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2\") " pod="calico-system/calico-kube-controllers-6677f6f656-js6vm" Jan 23 23:56:33.513843 kubelet[3714]: I0123 23:56:33.511622 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cbxv\" (UniqueName: \"kubernetes.io/projected/9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2-kube-api-access-7cbxv\") pod \"calico-kube-controllers-6677f6f656-js6vm\" (UID: \"9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2\") " pod="calico-system/calico-kube-controllers-6677f6f656-js6vm" Jan 23 23:56:33.513843 kubelet[3714]: I0123 23:56:33.511673 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lv6v\" (UniqueName: \"kubernetes.io/projected/70864b69-f424-425f-943d-f03fcd5d49da-kube-api-access-7lv6v\") pod \"calico-apiserver-6d5497fbb7-xhxnx\" (UID: \"70864b69-f424-425f-943d-f03fcd5d49da\") " pod="calico-apiserver/calico-apiserver-6d5497fbb7-xhxnx" Jan 23 23:56:33.513843 kubelet[3714]: I0123 23:56:33.511723 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8b558c76-7f3e-4806-8d02-51d3c08c8f13-config-volume\") pod \"coredns-668d6bf9bc-dxn2k\" (UID: \"8b558c76-7f3e-4806-8d02-51d3c08c8f13\") " pod="kube-system/coredns-668d6bf9bc-dxn2k" Jan 23 23:56:33.513843 kubelet[3714]: I0123 23:56:33.511767 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e511819f-7fe1-47d1-b5b7-5258bf08f097-calico-apiserver-certs\") pod \"calico-apiserver-6d5497fbb7-6rwm7\" (UID: \"e511819f-7fe1-47d1-b5b7-5258bf08f097\") " pod="calico-apiserver/calico-apiserver-6d5497fbb7-6rwm7" Jan 23 23:56:33.514214 kubelet[3714]: I0123 23:56:33.511807 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7-whisker-backend-key-pair\") pod \"whisker-6db99dc799-kdht2\" (UID: \"c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7\") " pod="calico-system/whisker-6db99dc799-kdht2" Jan 23 23:56:33.514214 kubelet[3714]: I0123 23:56:33.511848 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz7wk\" (UniqueName: \"kubernetes.io/projected/c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7-kube-api-access-jz7wk\") pod \"whisker-6db99dc799-kdht2\" (UID: \"c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7\") " pod="calico-system/whisker-6db99dc799-kdht2" Jan 23 23:56:33.514214 kubelet[3714]: I0123 23:56:33.511895 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/062e26d7-bfb2-4194-8340-6fddf424a2ce-config\") pod \"goldmane-666569f655-qtwrz\" (UID: \"062e26d7-bfb2-4194-8340-6fddf424a2ce\") " pod="calico-system/goldmane-666569f655-qtwrz" Jan 23 23:56:33.514214 kubelet[3714]: I0123 23:56:33.511943 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/062e26d7-bfb2-4194-8340-6fddf424a2ce-goldmane-key-pair\") pod \"goldmane-666569f655-qtwrz\" (UID: \"062e26d7-bfb2-4194-8340-6fddf424a2ce\") " pod="calico-system/goldmane-666569f655-qtwrz" Jan 23 23:56:33.514214 kubelet[3714]: I0123 
23:56:33.511981 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7-whisker-ca-bundle\") pod \"whisker-6db99dc799-kdht2\" (UID: \"c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7\") " pod="calico-system/whisker-6db99dc799-kdht2" Jan 23 23:56:33.643436 containerd[2180]: time="2026-01-23T23:56:33.641692539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9ddld,Uid:e5b6c2c1-0276-4f5f-9587-f464f0aab16d,Namespace:kube-system,Attempt:0,}" Jan 23 23:56:33.716013 containerd[2180]: time="2026-01-23T23:56:33.715525935Z" level=info msg="shim disconnected" id=946098616baf2d158974e3539d94236132dd8cd4aaf5b45f699bd2eca0d4d055 namespace=k8s.io Jan 23 23:56:33.716013 containerd[2180]: time="2026-01-23T23:56:33.715621551Z" level=warning msg="cleaning up after shim disconnected" id=946098616baf2d158974e3539d94236132dd8cd4aaf5b45f699bd2eca0d4d055 namespace=k8s.io Jan 23 23:56:33.716013 containerd[2180]: time="2026-01-23T23:56:33.715643679Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:56:33.736901 containerd[2180]: time="2026-01-23T23:56:33.736315863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-md8cr,Uid:ef69f672-ed17-43f4-a4a8-8456f661673c,Namespace:calico-system,Attempt:0,}" Jan 23 23:56:33.748150 containerd[2180]: time="2026-01-23T23:56:33.747705051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dxn2k,Uid:8b558c76-7f3e-4806-8d02-51d3c08c8f13,Namespace:kube-system,Attempt:0,}" Jan 23 23:56:33.786290 containerd[2180]: time="2026-01-23T23:56:33.786110751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5497fbb7-6rwm7,Uid:e511819f-7fe1-47d1-b5b7-5258bf08f097,Namespace:calico-apiserver,Attempt:0,}" Jan 23 23:56:33.787312 containerd[2180]: time="2026-01-23T23:56:33.787234743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5497fbb7-xhxnx,Uid:70864b69-f424-425f-943d-f03fcd5d49da,Namespace:calico-apiserver,Attempt:0,}" Jan 23 23:56:33.807043 containerd[2180]: time="2026-01-23T23:56:33.804444015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6db99dc799-kdht2,Uid:c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7,Namespace:calico-system,Attempt:0,}" Jan 23 23:56:33.964806 containerd[2180]: time="2026-01-23T23:56:33.964263124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6677f6f656-js6vm,Uid:9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2,Namespace:calico-system,Attempt:0,}" Jan 23 23:56:33.996345 containerd[2180]: time="2026-01-23T23:56:33.996257836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qtwrz,Uid:062e26d7-bfb2-4194-8340-6fddf424a2ce,Namespace:calico-system,Attempt:0,}" Jan 23 23:56:34.022877 containerd[2180]: time="2026-01-23T23:56:34.022825849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 23:56:34.266419 containerd[2180]: time="2026-01-23T23:56:34.264520718Z" level=error msg="Failed to destroy network for sandbox \"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.275143 containerd[2180]: time="2026-01-23T23:56:34.273862226Z" level=error msg="encountered an error cleaning up failed 
sandbox \"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.275143 containerd[2180]: time="2026-01-23T23:56:34.273970778Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5497fbb7-6rwm7,Uid:e511819f-7fe1-47d1-b5b7-5258bf08f097,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.276593 kubelet[3714]: E0123 23:56:34.274434 3714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.276593 kubelet[3714]: E0123 23:56:34.274983 3714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d5497fbb7-6rwm7" Jan 23 23:56:34.276593 kubelet[3714]: E0123 23:56:34.275053 3714 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d5497fbb7-6rwm7" Jan 23 23:56:34.280541 kubelet[3714]: E0123 23:56:34.275152 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d5497fbb7-6rwm7_calico-apiserver(e511819f-7fe1-47d1-b5b7-5258bf08f097)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d5497fbb7-6rwm7_calico-apiserver(e511819f-7fe1-47d1-b5b7-5258bf08f097)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d5497fbb7-6rwm7" podUID="e511819f-7fe1-47d1-b5b7-5258bf08f097" Jan 23 23:56:34.308842 containerd[2180]: time="2026-01-23T23:56:34.308760230Z" level=error msg="Failed to destroy network for sandbox \"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 
23:56:34.323423 containerd[2180]: time="2026-01-23T23:56:34.318897386Z" level=error msg="encountered an error cleaning up failed sandbox \"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.323423 containerd[2180]: time="2026-01-23T23:56:34.319010654Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9ddld,Uid:e5b6c2c1-0276-4f5f-9587-f464f0aab16d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.324932 kubelet[3714]: E0123 23:56:34.319313 3714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.324932 kubelet[3714]: E0123 23:56:34.320740 3714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9ddld" Jan 23 23:56:34.324932 kubelet[3714]: E0123 23:56:34.320826 3714 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-9ddld" Jan 23 23:56:34.325161 kubelet[3714]: E0123 23:56:34.320957 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-9ddld_kube-system(e5b6c2c1-0276-4f5f-9587-f464f0aab16d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-9ddld_kube-system(e5b6c2c1-0276-4f5f-9587-f464f0aab16d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-9ddld" podUID="e5b6c2c1-0276-4f5f-9587-f464f0aab16d" Jan 23 23:56:34.331607 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf-shm.mount: Deactivated successfully. 
Jan 23 23:56:34.333139 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495-shm.mount: Deactivated successfully. Jan 23 23:56:34.337224 containerd[2180]: time="2026-01-23T23:56:34.337009658Z" level=error msg="Failed to destroy network for sandbox \"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.341754 containerd[2180]: time="2026-01-23T23:56:34.339854654Z" level=error msg="encountered an error cleaning up failed sandbox \"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.341754 containerd[2180]: time="2026-01-23T23:56:34.339957734Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-md8cr,Uid:ef69f672-ed17-43f4-a4a8-8456f661673c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.342939 kubelet[3714]: E0123 23:56:34.340268 3714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.342939 kubelet[3714]: E0123 23:56:34.340350 3714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-md8cr" Jan 23 23:56:34.342939 kubelet[3714]: E0123 23:56:34.340409 3714 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-md8cr" Jan 23 23:56:34.344582 kubelet[3714]: E0123 23:56:34.340475 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-md8cr_calico-system(ef69f672-ed17-43f4-a4a8-8456f661673c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-md8cr_calico-system(ef69f672-ed17-43f4-a4a8-8456f661673c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-md8cr" podUID="ef69f672-ed17-43f4-a4a8-8456f661673c" Jan 23 23:56:34.348320 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6-shm.mount: Deactivated successfully. Jan 23 23:56:34.371288 containerd[2180]: time="2026-01-23T23:56:34.370747046Z" level=error msg="Failed to destroy network for sandbox \"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.381181 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c-shm.mount: Deactivated successfully. Jan 23 23:56:34.387604 containerd[2180]: time="2026-01-23T23:56:34.386768642Z" level=error msg="encountered an error cleaning up failed sandbox \"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.387604 containerd[2180]: time="2026-01-23T23:56:34.386867030Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dxn2k,Uid:8b558c76-7f3e-4806-8d02-51d3c08c8f13,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.387833 kubelet[3714]: E0123 23:56:34.387167 3714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.387833 kubelet[3714]: E0123 23:56:34.387281 3714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dxn2k" Jan 23 23:56:34.387833 kubelet[3714]: E0123 23:56:34.387347 3714 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dxn2k" Jan 23 23:56:34.388955 kubelet[3714]: E0123 23:56:34.387469 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-668d6bf9bc-dxn2k_kube-system(8b558c76-7f3e-4806-8d02-51d3c08c8f13)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dxn2k_kube-system(8b558c76-7f3e-4806-8d02-51d3c08c8f13)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dxn2k" podUID="8b558c76-7f3e-4806-8d02-51d3c08c8f13" Jan 23 23:56:34.392695 containerd[2180]: time="2026-01-23T23:56:34.392509310Z" level=error msg="Failed to destroy network for sandbox \"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.398538 containerd[2180]: time="2026-01-23T23:56:34.398474114Z" level=error msg="encountered an error cleaning up failed sandbox \"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.398896 containerd[2180]: time="2026-01-23T23:56:34.398769386Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6db99dc799-kdht2,Uid:c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.400775 kubelet[3714]: E0123 23:56:34.400705 3714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.400939 kubelet[3714]: E0123 23:56:34.400811 3714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6db99dc799-kdht2" Jan 23 23:56:34.400939 kubelet[3714]: E0123 23:56:34.400852 3714 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6db99dc799-kdht2" Jan 23 23:56:34.401086 kubelet[3714]: E0123 23:56:34.400926 3714 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6db99dc799-kdht2_calico-system(c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6db99dc799-kdht2_calico-system(c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6db99dc799-kdht2" podUID="c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7" Jan 23 23:56:34.401514 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b-shm.mount: Deactivated successfully. Jan 23 23:56:34.408108 containerd[2180]: time="2026-01-23T23:56:34.407564090Z" level=error msg="Failed to destroy network for sandbox \"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.412427 containerd[2180]: time="2026-01-23T23:56:34.411814863Z" level=error msg="encountered an error cleaning up failed sandbox \"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.412427 containerd[2180]: time="2026-01-23T23:56:34.411920643Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5497fbb7-xhxnx,Uid:70864b69-f424-425f-943d-f03fcd5d49da,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.414293 kubelet[3714]: E0123 23:56:34.412870 3714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.414293 kubelet[3714]: E0123 23:56:34.412948 3714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d5497fbb7-xhxnx" Jan 23 23:56:34.414293 kubelet[3714]: E0123 23:56:34.412981 3714 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6d5497fbb7-xhxnx" Jan 23 23:56:34.414618 kubelet[3714]: E0123 23:56:34.413041 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6d5497fbb7-xhxnx_calico-apiserver(70864b69-f424-425f-943d-f03fcd5d49da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6d5497fbb7-xhxnx_calico-apiserver(70864b69-f424-425f-943d-f03fcd5d49da)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d5497fbb7-xhxnx" podUID="70864b69-f424-425f-943d-f03fcd5d49da" Jan 23 23:56:34.463739 containerd[2180]: time="2026-01-23T23:56:34.463583511Z" level=error msg="Failed to destroy network for sandbox \"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.465425 containerd[2180]: time="2026-01-23T23:56:34.465237015Z" level=error msg="encountered an error cleaning up failed sandbox \"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.465703 containerd[2180]: time="2026-01-23T23:56:34.465555387Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6677f6f656-js6vm,Uid:9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.466589 kubelet[3714]: E0123 23:56:34.466362 3714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.466755 kubelet[3714]: E0123 23:56:34.466629 3714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6677f6f656-js6vm" Jan 23 23:56:34.466889 kubelet[3714]: E0123 23:56:34.466814 3714 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc 
error: code = Unknown desc = failed to setup network for sandbox \"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6677f6f656-js6vm" Jan 23 23:56:34.467030 kubelet[3714]: E0123 23:56:34.466948 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6677f6f656-js6vm_calico-system(9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6677f6f656-js6vm_calico-system(9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6677f6f656-js6vm" podUID="9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2" Jan 23 23:56:34.477724 containerd[2180]: time="2026-01-23T23:56:34.477640695Z" level=error msg="Failed to destroy network for sandbox \"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.478274 containerd[2180]: time="2026-01-23T23:56:34.478211559Z" level=error msg="encountered an error cleaning up failed sandbox \"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.478465 containerd[2180]: time="2026-01-23T23:56:34.478326063Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qtwrz,Uid:062e26d7-bfb2-4194-8340-6fddf424a2ce,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.478784 kubelet[3714]: E0123 23:56:34.478696 3714 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:34.478784 kubelet[3714]: E0123 23:56:34.478773 3714 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-qtwrz" Jan 23 
23:56:34.478935 kubelet[3714]: E0123 23:56:34.478806 3714 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-qtwrz" Jan 23 23:56:34.478935 kubelet[3714]: E0123 23:56:34.478881 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-qtwrz_calico-system(062e26d7-bfb2-4194-8340-6fddf424a2ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-qtwrz_calico-system(062e26d7-bfb2-4194-8340-6fddf424a2ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-qtwrz" podUID="062e26d7-bfb2-4194-8340-6fddf424a2ce" Jan 23 23:56:35.021542 kubelet[3714]: I0123 23:56:35.020846 3714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" Jan 23 23:56:35.023569 containerd[2180]: time="2026-01-23T23:56:35.023521970Z" level=info msg="StopPodSandbox for \"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\"" Jan 23 23:56:35.026434 containerd[2180]: time="2026-01-23T23:56:35.024882842Z" level=info msg="Ensure that sandbox 3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b in task-service has been cleanup successfully" Jan 23 23:56:35.026569 kubelet[3714]: I0123 23:56:35.025828 3714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" Jan 23 23:56:35.027238 containerd[2180]: time="2026-01-23T23:56:35.027190202Z" level=info msg="StopPodSandbox for \"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\"" Jan 23 23:56:35.028689 containerd[2180]: time="2026-01-23T23:56:35.028635758Z" level=info msg="Ensure that sandbox 942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b in task-service has been cleanup successfully" Jan 23 23:56:35.033832 kubelet[3714]: I0123 23:56:35.033181 3714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" Jan 23 23:56:35.036322 containerd[2180]: time="2026-01-23T23:56:35.036273614Z" level=info msg="StopPodSandbox for \"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\"" Jan 23 23:56:35.038602 kubelet[3714]: I0123 23:56:35.037917 3714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" Jan 23 23:56:35.040000 containerd[2180]: time="2026-01-23T23:56:35.039933158Z" level=info msg="Ensure that sandbox 0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495 in task-service has been cleanup successfully" Jan 23 23:56:35.044362 containerd[2180]: time="2026-01-23T23:56:35.044272874Z" level=info msg="StopPodSandbox for 
\"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\"" Jan 23 23:56:35.047542 containerd[2180]: time="2026-01-23T23:56:35.046467818Z" level=info msg="Ensure that sandbox 21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf in task-service has been cleanup successfully" Jan 23 23:56:35.055270 kubelet[3714]: I0123 23:56:35.055232 3714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" Jan 23 23:56:35.058520 containerd[2180]: time="2026-01-23T23:56:35.058464278Z" level=info msg="StopPodSandbox for \"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\"" Jan 23 23:56:35.060988 containerd[2180]: time="2026-01-23T23:56:35.060908150Z" level=info msg="Ensure that sandbox bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c in task-service has been cleanup successfully" Jan 23 23:56:35.062333 kubelet[3714]: I0123 23:56:35.062297 3714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" Jan 23 23:56:35.064745 containerd[2180]: time="2026-01-23T23:56:35.064692710Z" level=info msg="StopPodSandbox for \"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\"" Jan 23 23:56:35.073765 containerd[2180]: time="2026-01-23T23:56:35.073708406Z" level=info msg="Ensure that sandbox 47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785 in task-service has been cleanup successfully" Jan 23 23:56:35.077418 kubelet[3714]: I0123 23:56:35.076658 3714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" Jan 23 23:56:35.078009 containerd[2180]: time="2026-01-23T23:56:35.077962130Z" level=info msg="StopPodSandbox for \"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\"" Jan 23 23:56:35.085555 containerd[2180]: time="2026-01-23T23:56:35.084366998Z" level=info msg="Ensure that sandbox 0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b in task-service has been cleanup successfully" Jan 23 23:56:35.089446 kubelet[3714]: I0123 23:56:35.089408 3714 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" Jan 23 23:56:35.090688 containerd[2180]: time="2026-01-23T23:56:35.090478238Z" level=info msg="StopPodSandbox for \"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\"" Jan 23 23:56:35.090858 containerd[2180]: time="2026-01-23T23:56:35.090788666Z" level=info msg="Ensure that sandbox 064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6 in task-service has been cleanup successfully" Jan 23 23:56:35.235025 containerd[2180]: time="2026-01-23T23:56:35.232963155Z" level=error msg="StopPodSandbox for \"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\" failed" error="failed to destroy network for sandbox \"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:35.237211 kubelet[3714]: E0123 23:56:35.237158 3714 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\": 
plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" Jan 23 23:56:35.237529 kubelet[3714]: E0123 23:56:35.237464 3714 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b"} Jan 23 23:56:35.237686 kubelet[3714]: E0123 23:56:35.237658 3714 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:56:35.237926 kubelet[3714]: E0123 23:56:35.237865 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6db99dc799-kdht2" podUID="c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7" Jan 23 23:56:35.244953 containerd[2180]: time="2026-01-23T23:56:35.244864875Z" level=error msg="StopPodSandbox for \"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\" failed" error="failed to destroy network for sandbox \"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:35.248804 kubelet[3714]: E0123 23:56:35.248665 3714 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" Jan 23 23:56:35.249043 kubelet[3714]: E0123 23:56:35.249008 3714 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf"} Jan 23 23:56:35.249195 kubelet[3714]: E0123 23:56:35.249166 3714 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e511819f-7fe1-47d1-b5b7-5258bf08f097\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:56:35.249656 kubelet[3714]: E0123 
23:56:35.249421 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e511819f-7fe1-47d1-b5b7-5258bf08f097\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d5497fbb7-6rwm7" podUID="e511819f-7fe1-47d1-b5b7-5258bf08f097" Jan 23 23:56:35.263264 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785-shm.mount: Deactivated successfully. Jan 23 23:56:35.263650 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b-shm.mount: Deactivated successfully. Jan 23 23:56:35.263925 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b-shm.mount: Deactivated successfully. Jan 23 23:56:35.271306 containerd[2180]: time="2026-01-23T23:56:35.270493935Z" level=error msg="StopPodSandbox for \"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\" failed" error="failed to destroy network for sandbox \"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:35.271482 kubelet[3714]: E0123 23:56:35.270848 3714 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" Jan 23 23:56:35.271482 kubelet[3714]: E0123 23:56:35.270915 3714 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495"} Jan 23 23:56:35.271482 kubelet[3714]: E0123 23:56:35.270970 3714 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e5b6c2c1-0276-4f5f-9587-f464f0aab16d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:56:35.271482 kubelet[3714]: E0123 23:56:35.271015 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e5b6c2c1-0276-4f5f-9587-f464f0aab16d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-668d6bf9bc-9ddld" podUID="e5b6c2c1-0276-4f5f-9587-f464f0aab16d" Jan 23 23:56:35.324013 containerd[2180]: time="2026-01-23T23:56:35.322860771Z" level=error msg="StopPodSandbox for \"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\" failed" error="failed to destroy network for sandbox \"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:35.324336 kubelet[3714]: E0123 23:56:35.323584 3714 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" Jan 23 23:56:35.324336 kubelet[3714]: E0123 23:56:35.323651 3714 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b"} Jan 23 23:56:35.324336 kubelet[3714]: E0123 23:56:35.323705 3714 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:56:35.324336 kubelet[3714]: E0123 23:56:35.323752 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6677f6f656-js6vm" podUID="9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2" Jan 23 23:56:35.335713 containerd[2180]: time="2026-01-23T23:56:35.335626599Z" level=error msg="StopPodSandbox for \"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\" failed" error="failed to destroy network for sandbox \"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:35.336303 kubelet[3714]: E0123 23:56:35.335969 3714 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" Jan 23 23:56:35.336303 kubelet[3714]: E0123 23:56:35.336038 3714 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c"} Jan 23 23:56:35.336303 kubelet[3714]: E0123 23:56:35.336098 3714 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8b558c76-7f3e-4806-8d02-51d3c08c8f13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:56:35.336303 kubelet[3714]: E0123 23:56:35.336154 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8b558c76-7f3e-4806-8d02-51d3c08c8f13\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dxn2k" podUID="8b558c76-7f3e-4806-8d02-51d3c08c8f13" Jan 23 23:56:35.345705 containerd[2180]: time="2026-01-23T23:56:35.345616203Z" level=error msg="StopPodSandbox for \"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\" failed" error="failed to destroy network for sandbox \"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:35.349213 kubelet[3714]: E0123 23:56:35.348569 3714 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" Jan 23 23:56:35.349213 kubelet[3714]: E0123 23:56:35.348657 3714 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785"} Jan 23 23:56:35.349213 kubelet[3714]: E0123 23:56:35.348713 3714 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"062e26d7-bfb2-4194-8340-6fddf424a2ce\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:56:35.349213 kubelet[3714]: E0123 23:56:35.348754 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"062e26d7-bfb2-4194-8340-6fddf424a2ce\" with KillPodSandboxError: \"rpc error: code 
= Unknown desc = failed to destroy network for sandbox \\\"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-qtwrz" podUID="062e26d7-bfb2-4194-8340-6fddf424a2ce" Jan 23 23:56:35.357800 containerd[2180]: time="2026-01-23T23:56:35.357628131Z" level=error msg="StopPodSandbox for \"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\" failed" error="failed to destroy network for sandbox \"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:35.358005 kubelet[3714]: E0123 23:56:35.357940 3714 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" Jan 23 23:56:35.358082 kubelet[3714]: E0123 23:56:35.358022 3714 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b"} Jan 23 23:56:35.358175 kubelet[3714]: E0123 23:56:35.358081 3714 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"70864b69-f424-425f-943d-f03fcd5d49da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:56:35.358175 kubelet[3714]: E0123 23:56:35.358123 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"70864b69-f424-425f-943d-f03fcd5d49da\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6d5497fbb7-xhxnx" podUID="70864b69-f424-425f-943d-f03fcd5d49da" Jan 23 23:56:35.360691 containerd[2180]: time="2026-01-23T23:56:35.360500151Z" level=error msg="StopPodSandbox for \"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\" failed" error="failed to destroy network for sandbox \"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:56:35.360966 kubelet[3714]: E0123 23:56:35.360908 3714 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" Jan 23 23:56:35.361064 kubelet[3714]: E0123 23:56:35.361001 3714 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6"} Jan 23 23:56:35.361131 kubelet[3714]: E0123 23:56:35.361086 3714 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ef69f672-ed17-43f4-a4a8-8456f661673c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:56:35.361253 kubelet[3714]: E0123 23:56:35.361127 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ef69f672-ed17-43f4-a4a8-8456f661673c\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-md8cr" podUID="ef69f672-ed17-43f4-a4a8-8456f661673c" Jan 23 23:56:40.963798 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1823412127.mount: Deactivated successfully. 
Jan 23 23:56:41.031455 containerd[2180]: time="2026-01-23T23:56:41.031152091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:41.034038 containerd[2180]: time="2026-01-23T23:56:41.033964687Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 23 23:56:41.036238 containerd[2180]: time="2026-01-23T23:56:41.036145015Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:41.043418 containerd[2180]: time="2026-01-23T23:56:41.042232507Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:56:41.043797 containerd[2180]: time="2026-01-23T23:56:41.043746811Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 7.02064429s" Jan 23 23:56:41.043945 containerd[2180]: time="2026-01-23T23:56:41.043913527Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 23 23:56:41.089841 containerd[2180]: time="2026-01-23T23:56:41.088642124Z" level=info msg="CreateContainer within sandbox \"91da1951e8ab39058c46774e2593482eb4ca34d4c31dae8d9efba8183364baff\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 23:56:41.125375 containerd[2180]: time="2026-01-23T23:56:41.125288504Z" level=info msg="CreateContainer within sandbox \"91da1951e8ab39058c46774e2593482eb4ca34d4c31dae8d9efba8183364baff\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"64a90ac0d72c05b8f4acddd8c4d4264789d5205c8cbe835455c24ff1b33b7d86\"" Jan 23 23:56:41.127690 containerd[2180]: time="2026-01-23T23:56:41.126355652Z" level=info msg="StartContainer for \"64a90ac0d72c05b8f4acddd8c4d4264789d5205c8cbe835455c24ff1b33b7d86\"" Jan 23 23:56:41.270610 containerd[2180]: time="2026-01-23T23:56:41.270424905Z" level=info msg="StartContainer for \"64a90ac0d72c05b8f4acddd8c4d4264789d5205c8cbe835455c24ff1b33b7d86\" returns successfully" Jan 23 23:56:41.545325 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 23:56:41.545583 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
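Two readings of the lines above. First, the pull arithmetic: containerd reports 150,934,562 bytes read in 7.02064429 s, roughly 21.5 MB/s (the "size 150934424" in the Pulled line is the content size recorded for the image, hence the small difference). Second, the wireguard module loading seconds after calico-node's StartContainer is consistent with Calico probing the kernel for its optional WireGuard node-to-node encryption; that is an inference from adjacency, not something these lines state. The arithmetic, as a throwaway check:

package main

import "fmt"

func main() {
	const bytesRead = 150934562.0 // "stop pulling image ... bytes read=150934562"
	const seconds = 7.02064429    // "Pulled image ... in 7.02064429s"
	fmt.Printf("%.1f MB/s\n", bytesRead/seconds/1e6) // prints 21.5 MB/s
}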
Jan 23 23:56:41.795894 containerd[2180]: time="2026-01-23T23:56:41.795801887Z" level=info msg="StopPodSandbox for \"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\"" Jan 23 23:56:42.222366 kubelet[3714]: I0123 23:56:42.222274 3714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xn285" podStartSLOduration=2.873476204 podStartE2EDuration="19.222243969s" podCreationTimestamp="2026-01-23 23:56:23 +0000 UTC" firstStartedPulling="2026-01-23 23:56:24.697183218 +0000 UTC m=+36.272376505" lastFinishedPulling="2026-01-23 23:56:41.045950971 +0000 UTC m=+52.621144270" observedRunningTime="2026-01-23 23:56:42.220309377 +0000 UTC m=+53.795502700" watchObservedRunningTime="2026-01-23 23:56:42.222243969 +0000 UTC m=+53.797437256" Jan 23 23:56:42.518124 containerd[2180]: 2026-01-23 23:56:42.309 [INFO][4895] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" Jan 23 23:56:42.518124 containerd[2180]: 2026-01-23 23:56:42.313 [INFO][4895] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" iface="eth0" netns="/var/run/netns/cni-0ad5addb-eff3-54ed-3dc5-1e0516a05240" Jan 23 23:56:42.518124 containerd[2180]: 2026-01-23 23:56:42.317 [INFO][4895] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" iface="eth0" netns="/var/run/netns/cni-0ad5addb-eff3-54ed-3dc5-1e0516a05240" Jan 23 23:56:42.518124 containerd[2180]: 2026-01-23 23:56:42.324 [INFO][4895] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" iface="eth0" netns="/var/run/netns/cni-0ad5addb-eff3-54ed-3dc5-1e0516a05240" Jan 23 23:56:42.518124 containerd[2180]: 2026-01-23 23:56:42.324 [INFO][4895] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" Jan 23 23:56:42.518124 containerd[2180]: 2026-01-23 23:56:42.325 [INFO][4895] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" Jan 23 23:56:42.518124 containerd[2180]: 2026-01-23 23:56:42.469 [INFO][4907] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" HandleID="k8s-pod-network.942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" Workload="ip--172--31--18--35-k8s-whisker--6db99dc799--kdht2-eth0" Jan 23 23:56:42.518124 containerd[2180]: 2026-01-23 23:56:42.471 [INFO][4907] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:42.518124 containerd[2180]: 2026-01-23 23:56:42.471 [INFO][4907] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:42.518124 containerd[2180]: 2026-01-23 23:56:42.499 [WARNING][4907] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" HandleID="k8s-pod-network.942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" Workload="ip--172--31--18--35-k8s-whisker--6db99dc799--kdht2-eth0" Jan 23 23:56:42.518124 containerd[2180]: 2026-01-23 23:56:42.499 [INFO][4907] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" HandleID="k8s-pod-network.942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" Workload="ip--172--31--18--35-k8s-whisker--6db99dc799--kdht2-eth0" Jan 23 23:56:42.518124 containerd[2180]: 2026-01-23 23:56:42.503 [INFO][4907] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:42.518124 containerd[2180]: 2026-01-23 23:56:42.512 [INFO][4895] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" Jan 23 23:56:42.520830 containerd[2180]: time="2026-01-23T23:56:42.519570287Z" level=info msg="TearDown network for sandbox \"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\" successfully" Jan 23 23:56:42.520830 containerd[2180]: time="2026-01-23T23:56:42.519619103Z" level=info msg="StopPodSandbox for \"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\" returns successfully" Jan 23 23:56:42.527271 systemd[1]: run-netns-cni\x2d0ad5addb\x2deff3\x2d54ed\x2d3dc5\x2d1e0516a05240.mount: Deactivated successfully. Jan 23 23:56:42.603509 kubelet[3714]: I0123 23:56:42.603427 3714 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7-whisker-ca-bundle\") pod \"c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7\" (UID: \"c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7\") " Jan 23 23:56:42.603664 kubelet[3714]: I0123 23:56:42.603534 3714 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz7wk\" (UniqueName: \"kubernetes.io/projected/c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7-kube-api-access-jz7wk\") pod \"c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7\" (UID: \"c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7\") " Jan 23 23:56:42.603664 kubelet[3714]: I0123 23:56:42.603598 3714 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7-whisker-backend-key-pair\") pod \"c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7\" (UID: \"c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7\") " Jan 23 23:56:42.605073 kubelet[3714]: I0123 23:56:42.604479 3714 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7" (UID: "c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 23:56:42.612775 kubelet[3714]: I0123 23:56:42.612702 3714 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7" (UID: "c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 23:56:42.613607 kubelet[3714]: I0123 23:56:42.613568 3714 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7-kube-api-access-jz7wk" (OuterVolumeSpecName: "kube-api-access-jz7wk") pod "c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7" (UID: "c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7"). InnerVolumeSpecName "kube-api-access-jz7wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 23:56:42.615719 systemd[1]: var-lib-kubelet-pods-c7e78e36\x2d50da\x2d4ee2\x2d9fcd\x2d6e959fe6a1f7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djz7wk.mount: Deactivated successfully. Jan 23 23:56:42.616172 systemd[1]: var-lib-kubelet-pods-c7e78e36\x2d50da\x2d4ee2\x2d9fcd\x2d6e959fe6a1f7-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 23:56:42.704474 kubelet[3714]: I0123 23:56:42.704376 3714 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7-whisker-backend-key-pair\") on node \"ip-172-31-18-35\" DevicePath \"\"" Jan 23 23:56:42.704474 kubelet[3714]: I0123 23:56:42.704466 3714 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7-whisker-ca-bundle\") on node \"ip-172-31-18-35\" DevicePath \"\"" Jan 23 23:56:42.704812 kubelet[3714]: I0123 23:56:42.704490 3714 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jz7wk\" (UniqueName: \"kubernetes.io/projected/c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7-kube-api-access-jz7wk\") on node \"ip-172-31-18-35\" DevicePath \"\"" Jan 23 23:56:43.317447 kubelet[3714]: I0123 23:56:43.315508 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/78189c5a-8a21-4a26-9446-2683d6716342-whisker-backend-key-pair\") pod \"whisker-5b885984c9-qcp7h\" (UID: \"78189c5a-8a21-4a26-9446-2683d6716342\") " pod="calico-system/whisker-5b885984c9-qcp7h" Jan 23 23:56:43.317447 kubelet[3714]: I0123 23:56:43.315577 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/78189c5a-8a21-4a26-9446-2683d6716342-whisker-ca-bundle\") pod \"whisker-5b885984c9-qcp7h\" (UID: \"78189c5a-8a21-4a26-9446-2683d6716342\") " pod="calico-system/whisker-5b885984c9-qcp7h" Jan 23 23:56:43.317447 kubelet[3714]: I0123 23:56:43.315624 3714 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8tjk\" (UniqueName: \"kubernetes.io/projected/78189c5a-8a21-4a26-9446-2683d6716342-kube-api-access-n8tjk\") pod \"whisker-5b885984c9-qcp7h\" (UID: \"78189c5a-8a21-4a26-9446-2683d6716342\") " pod="calico-system/whisker-5b885984c9-qcp7h" Jan 23 23:56:43.611725 containerd[2180]: time="2026-01-23T23:56:43.611564484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b885984c9-qcp7h,Uid:78189c5a-8a21-4a26-9446-2683d6716342,Namespace:calico-system,Attempt:0,}" Jan 23 23:56:44.091638 systemd[1]: Started sshd@7-172.31.18.35:22-4.153.228.146:55056.service - OpenSSH per-connection server daemon (4.153.228.146:55056). Jan 23 23:56:44.235834 (udev-worker)[4880]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 23:56:44.247936 systemd-networkd[1714]: cali7ebff8b16b3: Link UP Jan 23 23:56:44.251733 systemd-networkd[1714]: cali7ebff8b16b3: Gained carrier Jan 23 23:56:44.343361 containerd[2180]: 2026-01-23 23:56:43.814 [INFO][4994] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 23:56:44.343361 containerd[2180]: 2026-01-23 23:56:43.881 [INFO][4994] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--35-k8s-whisker--5b885984c9--qcp7h-eth0 whisker-5b885984c9- calico-system 78189c5a-8a21-4a26-9446-2683d6716342 983 0 2026-01-23 23:56:43 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5b885984c9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-18-35 whisker-5b885984c9-qcp7h eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali7ebff8b16b3 [] [] }} ContainerID="68ec0f97a2e78b7b2ae37540d090347ed6b40736e9d53cc5b467cc45d03ba78f" Namespace="calico-system" Pod="whisker-5b885984c9-qcp7h" WorkloadEndpoint="ip--172--31--18--35-k8s-whisker--5b885984c9--qcp7h-" Jan 23 23:56:44.343361 containerd[2180]: 2026-01-23 23:56:43.883 [INFO][4994] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="68ec0f97a2e78b7b2ae37540d090347ed6b40736e9d53cc5b467cc45d03ba78f" Namespace="calico-system" Pod="whisker-5b885984c9-qcp7h" WorkloadEndpoint="ip--172--31--18--35-k8s-whisker--5b885984c9--qcp7h-eth0" Jan 23 23:56:44.343361 containerd[2180]: 2026-01-23 23:56:44.017 [INFO][5048] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68ec0f97a2e78b7b2ae37540d090347ed6b40736e9d53cc5b467cc45d03ba78f" HandleID="k8s-pod-network.68ec0f97a2e78b7b2ae37540d090347ed6b40736e9d53cc5b467cc45d03ba78f" Workload="ip--172--31--18--35-k8s-whisker--5b885984c9--qcp7h-eth0" Jan 23 23:56:44.343361 containerd[2180]: 2026-01-23 23:56:44.018 [INFO][5048] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="68ec0f97a2e78b7b2ae37540d090347ed6b40736e9d53cc5b467cc45d03ba78f" HandleID="k8s-pod-network.68ec0f97a2e78b7b2ae37540d090347ed6b40736e9d53cc5b467cc45d03ba78f" Workload="ip--172--31--18--35-k8s-whisker--5b885984c9--qcp7h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000349de0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-35", "pod":"whisker-5b885984c9-qcp7h", "timestamp":"2026-01-23 23:56:44.017855134 +0000 UTC"}, Hostname:"ip-172-31-18-35", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:56:44.343361 containerd[2180]: 2026-01-23 23:56:44.018 [INFO][5048] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:44.343361 containerd[2180]: 2026-01-23 23:56:44.018 [INFO][5048] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
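Both the teardown earlier and this assignment bracket their IPAM work with "About to acquire host-wide IPAM lock" / "Released host-wide IPAM lock": every address operation on the node is serialized. A toy illustration of why that matters — this is a stand-in sketch, not Calico's implementation, which takes the lock through its datastore: with the mutex removed, two concurrent CNI ADDs could read the same free slot and hand out one IP twice.

package main

import (
	"fmt"
	"sync"
)

// allocator hands out ordinals from a 64-slot block, the same shape as
// one Calico /26 affinity block. The mutex plays the role of the
// host-wide IPAM lock seen in the log entries above.
type allocator struct {
	mu   sync.Mutex
	used [64]bool
}

func (a *allocator) assign() (int, bool) {
	a.mu.Lock() // "About to acquire host-wide IPAM lock."
	defer a.mu.Unlock()
	for i, inUse := range a.used {
		if !inUse {
			a.used[i] = true
			return i, true
		}
	}
	return 0, false // block exhausted; Calico would claim another block
}

func main() {
	var a allocator
	var wg sync.WaitGroup
	for n := 0; n < 3; n++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			if ord, ok := a.assign(); ok {
				fmt.Println("claimed ordinal", ord)
			}
		}()
	}
	wg.Wait()
}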
Jan 23 23:56:44.343361 containerd[2180]: 2026-01-23 23:56:44.018 [INFO][5048] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-35' Jan 23 23:56:44.343361 containerd[2180]: 2026-01-23 23:56:44.046 [INFO][5048] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.68ec0f97a2e78b7b2ae37540d090347ed6b40736e9d53cc5b467cc45d03ba78f" host="ip-172-31-18-35" Jan 23 23:56:44.343361 containerd[2180]: 2026-01-23 23:56:44.071 [INFO][5048] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-35" Jan 23 23:56:44.343361 containerd[2180]: 2026-01-23 23:56:44.092 [INFO][5048] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:44.343361 containerd[2180]: 2026-01-23 23:56:44.098 [INFO][5048] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:44.343361 containerd[2180]: 2026-01-23 23:56:44.106 [INFO][5048] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:44.343361 containerd[2180]: 2026-01-23 23:56:44.106 [INFO][5048] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.68ec0f97a2e78b7b2ae37540d090347ed6b40736e9d53cc5b467cc45d03ba78f" host="ip-172-31-18-35" Jan 23 23:56:44.343361 containerd[2180]: 2026-01-23 23:56:44.112 [INFO][5048] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.68ec0f97a2e78b7b2ae37540d090347ed6b40736e9d53cc5b467cc45d03ba78f Jan 23 23:56:44.343361 containerd[2180]: 2026-01-23 23:56:44.126 [INFO][5048] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.68ec0f97a2e78b7b2ae37540d090347ed6b40736e9d53cc5b467cc45d03ba78f" host="ip-172-31-18-35" Jan 23 23:56:44.343361 containerd[2180]: 2026-01-23 23:56:44.154 [INFO][5048] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.1/26] block=192.168.59.0/26 handle="k8s-pod-network.68ec0f97a2e78b7b2ae37540d090347ed6b40736e9d53cc5b467cc45d03ba78f" host="ip-172-31-18-35" Jan 23 23:56:44.343361 containerd[2180]: 2026-01-23 23:56:44.155 [INFO][5048] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.1/26] handle="k8s-pod-network.68ec0f97a2e78b7b2ae37540d090347ed6b40736e9d53cc5b467cc45d03ba78f" host="ip-172-31-18-35" Jan 23 23:56:44.343361 containerd[2180]: 2026-01-23 23:56:44.157 [INFO][5048] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
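The sequence above is Calico's block-affinity IPAM in one pass: confirm this host's affinity for block 192.168.59.0/26, load the block, claim the first free ordinal, and write the block back to the datastore. A /26 holds 2^(32-26) = 64 addresses; this sandbox receives 192.168.59.1, and the next two sandboxes in this log get .2 and .3. The arithmetic, checked with the Go standard library:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.59.0/26")
	// A /26 spans 2^(32-26) = 64 addresses.
	fmt.Println("addresses in block:", 1<<(32-block.Bits()))

	// Walk the first few addresses; the log shows .1 handed out first,
	// then .2 and .3 for the next two sandboxes on this node.
	addr := block.Addr()
	for i := 0; i < 4; i++ {
		fmt.Println("ordinal", i, "=", addr)
		addr = addr.Next()
	}
}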
Jan 23 23:56:44.343361 containerd[2180]: 2026-01-23 23:56:44.166 [INFO][5048] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.1/26] IPv6=[] ContainerID="68ec0f97a2e78b7b2ae37540d090347ed6b40736e9d53cc5b467cc45d03ba78f" HandleID="k8s-pod-network.68ec0f97a2e78b7b2ae37540d090347ed6b40736e9d53cc5b467cc45d03ba78f" Workload="ip--172--31--18--35-k8s-whisker--5b885984c9--qcp7h-eth0" Jan 23 23:56:44.360143 containerd[2180]: 2026-01-23 23:56:44.187 [INFO][4994] cni-plugin/k8s.go 418: Populated endpoint ContainerID="68ec0f97a2e78b7b2ae37540d090347ed6b40736e9d53cc5b467cc45d03ba78f" Namespace="calico-system" Pod="whisker-5b885984c9-qcp7h" WorkloadEndpoint="ip--172--31--18--35-k8s-whisker--5b885984c9--qcp7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-whisker--5b885984c9--qcp7h-eth0", GenerateName:"whisker-5b885984c9-", Namespace:"calico-system", SelfLink:"", UID:"78189c5a-8a21-4a26-9446-2683d6716342", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b885984c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"", Pod:"whisker-5b885984c9-qcp7h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.59.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7ebff8b16b3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:44.360143 containerd[2180]: 2026-01-23 23:56:44.188 [INFO][4994] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.1/32] ContainerID="68ec0f97a2e78b7b2ae37540d090347ed6b40736e9d53cc5b467cc45d03ba78f" Namespace="calico-system" Pod="whisker-5b885984c9-qcp7h" WorkloadEndpoint="ip--172--31--18--35-k8s-whisker--5b885984c9--qcp7h-eth0" Jan 23 23:56:44.360143 containerd[2180]: 2026-01-23 23:56:44.188 [INFO][4994] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7ebff8b16b3 ContainerID="68ec0f97a2e78b7b2ae37540d090347ed6b40736e9d53cc5b467cc45d03ba78f" Namespace="calico-system" Pod="whisker-5b885984c9-qcp7h" WorkloadEndpoint="ip--172--31--18--35-k8s-whisker--5b885984c9--qcp7h-eth0" Jan 23 23:56:44.360143 containerd[2180]: 2026-01-23 23:56:44.260 [INFO][4994] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="68ec0f97a2e78b7b2ae37540d090347ed6b40736e9d53cc5b467cc45d03ba78f" Namespace="calico-system" Pod="whisker-5b885984c9-qcp7h" WorkloadEndpoint="ip--172--31--18--35-k8s-whisker--5b885984c9--qcp7h-eth0" Jan 23 23:56:44.360143 containerd[2180]: 2026-01-23 23:56:44.261 [INFO][4994] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="68ec0f97a2e78b7b2ae37540d090347ed6b40736e9d53cc5b467cc45d03ba78f" Namespace="calico-system" Pod="whisker-5b885984c9-qcp7h" 
WorkloadEndpoint="ip--172--31--18--35-k8s-whisker--5b885984c9--qcp7h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-whisker--5b885984c9--qcp7h-eth0", GenerateName:"whisker-5b885984c9-", Namespace:"calico-system", SelfLink:"", UID:"78189c5a-8a21-4a26-9446-2683d6716342", ResourceVersion:"983", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5b885984c9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"68ec0f97a2e78b7b2ae37540d090347ed6b40736e9d53cc5b467cc45d03ba78f", Pod:"whisker-5b885984c9-qcp7h", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.59.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali7ebff8b16b3", MAC:"82:8b:14:a0:e5:b3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:44.360143 containerd[2180]: 2026-01-23 23:56:44.307 [INFO][4994] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="68ec0f97a2e78b7b2ae37540d090347ed6b40736e9d53cc5b467cc45d03ba78f" Namespace="calico-system" Pod="whisker-5b885984c9-qcp7h" WorkloadEndpoint="ip--172--31--18--35-k8s-whisker--5b885984c9--qcp7h-eth0" Jan 23 23:56:44.344877 systemd[1]: run-containerd-runc-k8s.io-64a90ac0d72c05b8f4acddd8c4d4264789d5205c8cbe835455c24ff1b33b7d86-runc.JV08j6.mount: Deactivated successfully. Jan 23 23:56:44.449329 containerd[2180]: time="2026-01-23T23:56:44.446677968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:44.449329 containerd[2180]: time="2026-01-23T23:56:44.446801568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:44.449329 containerd[2180]: time="2026-01-23T23:56:44.448181916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:44.449329 containerd[2180]: time="2026-01-23T23:56:44.448522332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:44.735624 sshd[5055]: Accepted publickey for core from 4.153.228.146 port 55056 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:44.742711 containerd[2180]: time="2026-01-23T23:56:44.741756854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5b885984c9-qcp7h,Uid:78189c5a-8a21-4a26-9446-2683d6716342,Namespace:calico-system,Attempt:0,} returns sandbox id \"68ec0f97a2e78b7b2ae37540d090347ed6b40736e9d53cc5b467cc45d03ba78f\"" Jan 23 23:56:44.748146 sshd[5055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:44.759696 containerd[2180]: time="2026-01-23T23:56:44.755269766Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:56:44.763264 kubelet[3714]: I0123 23:56:44.763183 3714 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7" path="/var/lib/kubelet/pods/c7e78e36-50da-4ee2-9fcd-6e959fe6a1f7/volumes" Jan 23 23:56:44.788614 systemd-logind[2134]: New session 8 of user core. Jan 23 23:56:44.799996 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 23:56:45.136350 containerd[2180]: time="2026-01-23T23:56:45.135875280Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:45.142675 containerd[2180]: time="2026-01-23T23:56:45.141228120Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:56:45.142675 containerd[2180]: time="2026-01-23T23:56:45.142451532Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:56:45.143615 kubelet[3714]: E0123 23:56:45.143228 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:56:45.146428 kubelet[3714]: E0123 23:56:45.143864 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:56:45.181220 kubelet[3714]: E0123 23:56:45.177471 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:be7b206d06df401fa8cc56417b3a1000,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n8tjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b885984c9-qcp7h_calico-system(78189c5a-8a21-4a26-9446-2683d6716342): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:45.189868 containerd[2180]: time="2026-01-23T23:56:45.189752772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:56:45.482799 containerd[2180]: time="2026-01-23T23:56:45.481382101Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:45.484868 containerd[2180]: time="2026-01-23T23:56:45.484654658Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:56:45.485247 containerd[2180]: time="2026-01-23T23:56:45.485051210Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:56:45.496174 kubelet[3714]: E0123 23:56:45.494961 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:56:45.496174 kubelet[3714]: E0123 23:56:45.495116 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:56:45.496449 kubelet[3714]: E0123 23:56:45.495364 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n8tjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b885984c9-qcp7h_calico-system(78189c5a-8a21-4a26-9446-2683d6716342): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:45.497509 kernel: bpftool[5160]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 23 23:56:45.504083 kubelet[3714]: E0123 23:56:45.503732 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b885984c9-qcp7h" podUID="78189c5a-8a21-4a26-9446-2683d6716342" Jan 23 23:56:45.511415 sshd[5055]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:45.527591 systemd[1]: 
sshd@7-172.31.18.35:22-4.153.228.146:55056.service: Deactivated successfully. Jan 23 23:56:45.536101 systemd-logind[2134]: Session 8 logged out. Waiting for processes to exit. Jan 23 23:56:45.537598 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 23:56:45.541761 systemd-logind[2134]: Removed session 8. Jan 23 23:56:45.761745 systemd-networkd[1714]: cali7ebff8b16b3: Gained IPv6LL Jan 23 23:56:45.970608 systemd-networkd[1714]: vxlan.calico: Link UP Jan 23 23:56:45.970622 systemd-networkd[1714]: vxlan.calico: Gained carrier Jan 23 23:56:46.062165 (udev-worker)[4879]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:56:46.198149 kubelet[3714]: E0123 23:56:46.195784 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b885984c9-qcp7h" podUID="78189c5a-8a21-4a26-9446-2683d6716342" Jan 23 23:56:46.729854 containerd[2180]: time="2026-01-23T23:56:46.728420656Z" level=info msg="StopPodSandbox for \"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\"" Jan 23 23:56:46.733204 containerd[2180]: time="2026-01-23T23:56:46.730598464Z" level=info msg="StopPodSandbox for \"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\"" Jan 23 23:56:47.019248 containerd[2180]: 2026-01-23 23:56:46.912 [INFO][5283] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" Jan 23 23:56:47.019248 containerd[2180]: 2026-01-23 23:56:46.912 [INFO][5283] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" iface="eth0" netns="/var/run/netns/cni-700ffdad-ac56-3a9e-cc66-6f52853ea22f" Jan 23 23:56:47.019248 containerd[2180]: 2026-01-23 23:56:46.915 [INFO][5283] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" iface="eth0" netns="/var/run/netns/cni-700ffdad-ac56-3a9e-cc66-6f52853ea22f" Jan 23 23:56:47.019248 containerd[2180]: 2026-01-23 23:56:46.915 [INFO][5283] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" iface="eth0" netns="/var/run/netns/cni-700ffdad-ac56-3a9e-cc66-6f52853ea22f" Jan 23 23:56:47.019248 containerd[2180]: 2026-01-23 23:56:46.915 [INFO][5283] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" Jan 23 23:56:47.019248 containerd[2180]: 2026-01-23 23:56:46.915 [INFO][5283] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" Jan 23 23:56:47.019248 containerd[2180]: 2026-01-23 23:56:46.994 [INFO][5300] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" HandleID="k8s-pod-network.21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0" Jan 23 23:56:47.019248 containerd[2180]: 2026-01-23 23:56:46.994 [INFO][5300] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:47.019248 containerd[2180]: 2026-01-23 23:56:46.994 [INFO][5300] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:47.019248 containerd[2180]: 2026-01-23 23:56:47.007 [WARNING][5300] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" HandleID="k8s-pod-network.21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0" Jan 23 23:56:47.019248 containerd[2180]: 2026-01-23 23:56:47.007 [INFO][5300] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" HandleID="k8s-pod-network.21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0" Jan 23 23:56:47.019248 containerd[2180]: 2026-01-23 23:56:47.010 [INFO][5300] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:47.019248 containerd[2180]: 2026-01-23 23:56:47.014 [INFO][5283] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" Jan 23 23:56:47.021375 containerd[2180]: time="2026-01-23T23:56:47.020109973Z" level=info msg="TearDown network for sandbox \"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\" successfully" Jan 23 23:56:47.021375 containerd[2180]: time="2026-01-23T23:56:47.020167393Z" level=info msg="StopPodSandbox for \"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\" returns successfully" Jan 23 23:56:47.021375 containerd[2180]: time="2026-01-23T23:56:47.025664437Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5497fbb7-6rwm7,Uid:e511819f-7fe1-47d1-b5b7-5258bf08f097,Namespace:calico-apiserver,Attempt:1,}" Jan 23 23:56:47.026260 systemd[1]: run-netns-cni\x2d700ffdad\x2dac56\x2d3a9e\x2dcc66\x2d6f52853ea22f.mount: Deactivated successfully. Jan 23 23:56:47.048529 containerd[2180]: 2026-01-23 23:56:46.896 [INFO][5282] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" Jan 23 23:56:47.048529 containerd[2180]: 2026-01-23 23:56:46.898 [INFO][5282] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" iface="eth0" netns="/var/run/netns/cni-f7e16109-a414-bd52-71f9-58139725daf8" Jan 23 23:56:47.048529 containerd[2180]: 2026-01-23 23:56:46.900 [INFO][5282] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" iface="eth0" netns="/var/run/netns/cni-f7e16109-a414-bd52-71f9-58139725daf8" Jan 23 23:56:47.048529 containerd[2180]: 2026-01-23 23:56:46.905 [INFO][5282] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" iface="eth0" netns="/var/run/netns/cni-f7e16109-a414-bd52-71f9-58139725daf8" Jan 23 23:56:47.048529 containerd[2180]: 2026-01-23 23:56:46.906 [INFO][5282] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" Jan 23 23:56:47.048529 containerd[2180]: 2026-01-23 23:56:46.906 [INFO][5282] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" Jan 23 23:56:47.048529 containerd[2180]: 2026-01-23 23:56:46.994 [INFO][5298] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" HandleID="k8s-pod-network.bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0" Jan 23 23:56:47.048529 containerd[2180]: 2026-01-23 23:56:46.997 [INFO][5298] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:47.048529 containerd[2180]: 2026-01-23 23:56:47.010 [INFO][5298] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:47.048529 containerd[2180]: 2026-01-23 23:56:47.035 [WARNING][5298] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" HandleID="k8s-pod-network.bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0" Jan 23 23:56:47.048529 containerd[2180]: 2026-01-23 23:56:47.036 [INFO][5298] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" HandleID="k8s-pod-network.bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0" Jan 23 23:56:47.048529 containerd[2180]: 2026-01-23 23:56:47.039 [INFO][5298] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:47.048529 containerd[2180]: 2026-01-23 23:56:47.042 [INFO][5282] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" Jan 23 23:56:47.050531 containerd[2180]: time="2026-01-23T23:56:47.049236145Z" level=info msg="TearDown network for sandbox \"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\" successfully" Jan 23 23:56:47.050531 containerd[2180]: time="2026-01-23T23:56:47.049286329Z" level=info msg="StopPodSandbox for \"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\" returns successfully" Jan 23 23:56:47.052301 containerd[2180]: time="2026-01-23T23:56:47.052109185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dxn2k,Uid:8b558c76-7f3e-4806-8d02-51d3c08c8f13,Namespace:kube-system,Attempt:1,}" Jan 23 23:56:47.056759 systemd[1]: run-netns-cni\x2df7e16109\x2da414\x2dbd52\x2d71f9\x2d58139725daf8.mount: Deactivated successfully. Jan 23 23:56:47.381896 (udev-worker)[5224]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:56:47.384825 systemd-networkd[1714]: caliadb821dde35: Link UP Jan 23 23:56:47.388349 systemd-networkd[1714]: caliadb821dde35: Gained carrier Jan 23 23:56:47.425717 systemd-networkd[1714]: vxlan.calico: Gained IPv6LL Jan 23 23:56:47.440174 containerd[2180]: 2026-01-23 23:56:47.198 [INFO][5312] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0 calico-apiserver-6d5497fbb7- calico-apiserver e511819f-7fe1-47d1-b5b7-5258bf08f097 1022 0 2026-01-23 23:56:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d5497fbb7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-35 calico-apiserver-6d5497fbb7-6rwm7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliadb821dde35 [] [] }} ContainerID="825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d" Namespace="calico-apiserver" Pod="calico-apiserver-6d5497fbb7-6rwm7" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-" Jan 23 23:56:47.440174 containerd[2180]: 2026-01-23 23:56:47.199 [INFO][5312] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d" Namespace="calico-apiserver" Pod="calico-apiserver-6d5497fbb7-6rwm7" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0" Jan 23 23:56:47.440174 containerd[2180]: 2026-01-23 23:56:47.284 [INFO][5337] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d" HandleID="k8s-pod-network.825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0" Jan 23 23:56:47.440174 containerd[2180]: 2026-01-23 23:56:47.284 [INFO][5337] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d" HandleID="k8s-pod-network.825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000391810), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-35", 
"pod":"calico-apiserver-6d5497fbb7-6rwm7", "timestamp":"2026-01-23 23:56:47.284159798 +0000 UTC"}, Hostname:"ip-172-31-18-35", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:56:47.440174 containerd[2180]: 2026-01-23 23:56:47.284 [INFO][5337] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:47.440174 containerd[2180]: 2026-01-23 23:56:47.284 [INFO][5337] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:47.440174 containerd[2180]: 2026-01-23 23:56:47.284 [INFO][5337] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-35' Jan 23 23:56:47.440174 containerd[2180]: 2026-01-23 23:56:47.300 [INFO][5337] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d" host="ip-172-31-18-35" Jan 23 23:56:47.440174 containerd[2180]: 2026-01-23 23:56:47.308 [INFO][5337] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-35" Jan 23 23:56:47.440174 containerd[2180]: 2026-01-23 23:56:47.316 [INFO][5337] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:47.440174 containerd[2180]: 2026-01-23 23:56:47.320 [INFO][5337] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:47.440174 containerd[2180]: 2026-01-23 23:56:47.324 [INFO][5337] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:47.440174 containerd[2180]: 2026-01-23 23:56:47.324 [INFO][5337] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d" host="ip-172-31-18-35" Jan 23 23:56:47.440174 containerd[2180]: 2026-01-23 23:56:47.327 [INFO][5337] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d Jan 23 23:56:47.440174 containerd[2180]: 2026-01-23 23:56:47.334 [INFO][5337] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d" host="ip-172-31-18-35" Jan 23 23:56:47.440174 containerd[2180]: 2026-01-23 23:56:47.356 [INFO][5337] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.2/26] block=192.168.59.0/26 handle="k8s-pod-network.825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d" host="ip-172-31-18-35" Jan 23 23:56:47.440174 containerd[2180]: 2026-01-23 23:56:47.356 [INFO][5337] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.2/26] handle="k8s-pod-network.825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d" host="ip-172-31-18-35" Jan 23 23:56:47.440174 containerd[2180]: 2026-01-23 23:56:47.356 [INFO][5337] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:56:47.440174 containerd[2180]: 2026-01-23 23:56:47.356 [INFO][5337] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.2/26] IPv6=[] ContainerID="825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d" HandleID="k8s-pod-network.825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0" Jan 23 23:56:47.444198 containerd[2180]: 2026-01-23 23:56:47.373 [INFO][5312] cni-plugin/k8s.go 418: Populated endpoint ContainerID="825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d" Namespace="calico-apiserver" Pod="calico-apiserver-6d5497fbb7-6rwm7" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0", GenerateName:"calico-apiserver-6d5497fbb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e511819f-7fe1-47d1-b5b7-5258bf08f097", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5497fbb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"", Pod:"calico-apiserver-6d5497fbb7-6rwm7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadb821dde35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:47.444198 containerd[2180]: 2026-01-23 23:56:47.374 [INFO][5312] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.2/32] ContainerID="825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d" Namespace="calico-apiserver" Pod="calico-apiserver-6d5497fbb7-6rwm7" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0" Jan 23 23:56:47.444198 containerd[2180]: 2026-01-23 23:56:47.374 [INFO][5312] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliadb821dde35 ContainerID="825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d" Namespace="calico-apiserver" Pod="calico-apiserver-6d5497fbb7-6rwm7" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0" Jan 23 23:56:47.444198 containerd[2180]: 2026-01-23 23:56:47.390 [INFO][5312] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d" Namespace="calico-apiserver" Pod="calico-apiserver-6d5497fbb7-6rwm7" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0" Jan 23 23:56:47.444198 containerd[2180]: 2026-01-23 23:56:47.391 [INFO][5312] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d" Namespace="calico-apiserver" Pod="calico-apiserver-6d5497fbb7-6rwm7" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0", GenerateName:"calico-apiserver-6d5497fbb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e511819f-7fe1-47d1-b5b7-5258bf08f097", ResourceVersion:"1022", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5497fbb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d", Pod:"calico-apiserver-6d5497fbb7-6rwm7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadb821dde35", MAC:"fe:26:b0:3e:fd:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:47.444198 containerd[2180]: 2026-01-23 23:56:47.427 [INFO][5312] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d" Namespace="calico-apiserver" Pod="calico-apiserver-6d5497fbb7-6rwm7" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0" Jan 23 23:56:47.567889 systemd-networkd[1714]: calif665c765f6d: Link UP Jan 23 23:56:47.571280 systemd-networkd[1714]: calif665c765f6d: Gained carrier Jan 23 23:56:47.636582 containerd[2180]: time="2026-01-23T23:56:47.630440476Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:47.636582 containerd[2180]: time="2026-01-23T23:56:47.630528376Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:47.636582 containerd[2180]: time="2026-01-23T23:56:47.630553960Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:47.636582 containerd[2180]: time="2026-01-23T23:56:47.630701680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:47.643008 containerd[2180]: 2026-01-23 23:56:47.201 [INFO][5321] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0 coredns-668d6bf9bc- kube-system 8b558c76-7f3e-4806-8d02-51d3c08c8f13 1021 0 2026-01-23 23:55:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-35 coredns-668d6bf9bc-dxn2k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif665c765f6d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-dxn2k" WorkloadEndpoint="ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-" Jan 23 23:56:47.643008 containerd[2180]: 2026-01-23 23:56:47.202 [INFO][5321] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-dxn2k" WorkloadEndpoint="ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0" Jan 23 23:56:47.643008 containerd[2180]: 2026-01-23 23:56:47.285 [INFO][5339] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8" HandleID="k8s-pod-network.9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0" Jan 23 23:56:47.643008 containerd[2180]: 2026-01-23 23:56:47.286 [INFO][5339] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8" HandleID="k8s-pod-network.9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000370180), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-35", "pod":"coredns-668d6bf9bc-dxn2k", "timestamp":"2026-01-23 23:56:47.285630086 +0000 UTC"}, Hostname:"ip-172-31-18-35", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:56:47.643008 containerd[2180]: 2026-01-23 23:56:47.286 [INFO][5339] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:47.643008 containerd[2180]: 2026-01-23 23:56:47.357 [INFO][5339] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
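The WorkloadEndpoint names used throughout these entries, such as ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0, appear to be built by doubling every "-" inside each component and then joining node, orchestrator, pod, and interface with single dashes, so the separators remain unambiguous. A sketch of that mangling as inferred from the names in this log (not lifted from Calico's source):

package main

import (
	"fmt"
	"strings"
)

// endpointName reproduces the WorkloadEndpoint names in this log:
// dashes inside each field are doubled, then the fields are joined
// with single dashes. Inferred from the logged names above.
func endpointName(node, orchestrator, pod, iface string) string {
	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
	return strings.Join([]string{esc(node), orchestrator, esc(pod), iface}, "-")
}

func main() {
	// Prints: ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0
	fmt.Println(endpointName("ip-172-31-18-35", "k8s", "coredns-668d6bf9bc-dxn2k", "eth0"))
}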
Jan 23 23:56:47.643008 containerd[2180]: 2026-01-23 23:56:47.357 [INFO][5339] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-35' Jan 23 23:56:47.643008 containerd[2180]: 2026-01-23 23:56:47.401 [INFO][5339] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8" host="ip-172-31-18-35" Jan 23 23:56:47.643008 containerd[2180]: 2026-01-23 23:56:47.431 [INFO][5339] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-35" Jan 23 23:56:47.643008 containerd[2180]: 2026-01-23 23:56:47.467 [INFO][5339] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:47.643008 containerd[2180]: 2026-01-23 23:56:47.479 [INFO][5339] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:47.643008 containerd[2180]: 2026-01-23 23:56:47.494 [INFO][5339] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:47.643008 containerd[2180]: 2026-01-23 23:56:47.494 [INFO][5339] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8" host="ip-172-31-18-35" Jan 23 23:56:47.643008 containerd[2180]: 2026-01-23 23:56:47.506 [INFO][5339] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8 Jan 23 23:56:47.643008 containerd[2180]: 2026-01-23 23:56:47.522 [INFO][5339] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8" host="ip-172-31-18-35" Jan 23 23:56:47.643008 containerd[2180]: 2026-01-23 23:56:47.544 [INFO][5339] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.3/26] block=192.168.59.0/26 handle="k8s-pod-network.9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8" host="ip-172-31-18-35" Jan 23 23:56:47.643008 containerd[2180]: 2026-01-23 23:56:47.544 [INFO][5339] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.3/26] handle="k8s-pod-network.9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8" host="ip-172-31-18-35" Jan 23 23:56:47.643008 containerd[2180]: 2026-01-23 23:56:47.544 [INFO][5339] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
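One readability note for the endpoint dumps that follow: the v3.WorkloadEndpointPort fields print as Go hex literals, so the coredns ports decode to the usual values:

package main

import "fmt"

func main() {
	// Ports from the WorkloadEndpointPort dumps below, decoded:
	fmt.Println("dns, dns-tcp:", 0x35)   // 53
	fmt.Println("metrics:     ", 0x23c1) // 9153
}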
Jan 23 23:56:47.643008 containerd[2180]: 2026-01-23 23:56:47.544 [INFO][5339] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.3/26] IPv6=[] ContainerID="9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8" HandleID="k8s-pod-network.9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0" Jan 23 23:56:47.651547 containerd[2180]: 2026-01-23 23:56:47.558 [INFO][5321] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-dxn2k" WorkloadEndpoint="ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8b558c76-7f3e-4806-8d02-51d3c08c8f13", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"", Pod:"coredns-668d6bf9bc-dxn2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif665c765f6d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:47.651547 containerd[2180]: 2026-01-23 23:56:47.558 [INFO][5321] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.3/32] ContainerID="9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-dxn2k" WorkloadEndpoint="ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0" Jan 23 23:56:47.651547 containerd[2180]: 2026-01-23 23:56:47.558 [INFO][5321] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif665c765f6d ContainerID="9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-dxn2k" WorkloadEndpoint="ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0" Jan 23 23:56:47.651547 containerd[2180]: 2026-01-23 23:56:47.576 [INFO][5321] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-dxn2k" 
WorkloadEndpoint="ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0" Jan 23 23:56:47.651547 containerd[2180]: 2026-01-23 23:56:47.580 [INFO][5321] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-dxn2k" WorkloadEndpoint="ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8b558c76-7f3e-4806-8d02-51d3c08c8f13", ResourceVersion:"1021", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8", Pod:"coredns-668d6bf9bc-dxn2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif665c765f6d", MAC:"8a:bd:20:9c:a5:a4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:47.651547 containerd[2180]: 2026-01-23 23:56:47.613 [INFO][5321] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8" Namespace="kube-system" Pod="coredns-668d6bf9bc-dxn2k" WorkloadEndpoint="ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0" Jan 23 23:56:47.726691 containerd[2180]: time="2026-01-23T23:56:47.725828885Z" level=info msg="StopPodSandbox for \"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\"" Jan 23 23:56:47.730535 containerd[2180]: time="2026-01-23T23:56:47.730204409Z" level=info msg="StopPodSandbox for \"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\"" Jan 23 23:56:47.739210 containerd[2180]: time="2026-01-23T23:56:47.738869069Z" level=info msg="StopPodSandbox for \"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\"" Jan 23 23:56:47.817776 containerd[2180]: time="2026-01-23T23:56:47.816618497Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:47.818489 containerd[2180]: time="2026-01-23T23:56:47.818102777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:47.820100 containerd[2180]: time="2026-01-23T23:56:47.818302217Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:47.820100 containerd[2180]: time="2026-01-23T23:56:47.819968921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:47.938769 containerd[2180]: time="2026-01-23T23:56:47.936984018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5497fbb7-6rwm7,Uid:e511819f-7fe1-47d1-b5b7-5258bf08f097,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d\"" Jan 23 23:56:47.957265 containerd[2180]: time="2026-01-23T23:56:47.957179586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:56:48.210752 containerd[2180]: time="2026-01-23T23:56:48.210519747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dxn2k,Uid:8b558c76-7f3e-4806-8d02-51d3c08c8f13,Namespace:kube-system,Attempt:1,} returns sandbox id \"9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8\"" Jan 23 23:56:48.225894 containerd[2180]: time="2026-01-23T23:56:48.225659043Z" level=info msg="CreateContainer within sandbox \"9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:56:48.264746 containerd[2180]: time="2026-01-23T23:56:48.263954979Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:48.275745 containerd[2180]: time="2026-01-23T23:56:48.274707567Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:56:48.275745 containerd[2180]: time="2026-01-23T23:56:48.274867863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:56:48.277444 kubelet[3714]: E0123 23:56:48.276585 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:56:48.277444 kubelet[3714]: E0123 23:56:48.276664 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:56:48.292417 kubelet[3714]: E0123 23:56:48.292269 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tqd4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d5497fbb7-6rwm7_calico-apiserver(e511819f-7fe1-47d1-b5b7-5258bf08f097): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:48.294840 kubelet[3714]: E0123 23:56:48.294766 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5497fbb7-6rwm7" podUID="e511819f-7fe1-47d1-b5b7-5258bf08f097" Jan 23 23:56:48.302757 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2324627275.mount: Deactivated successfully. Jan 23 23:56:48.324204 containerd[2180]: 2026-01-23 23:56:47.987 [INFO][5445] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" Jan 23 23:56:48.324204 containerd[2180]: 2026-01-23 23:56:47.987 [INFO][5445] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" iface="eth0" netns="/var/run/netns/cni-09670570-5508-4982-8cd5-b5473571320c" Jan 23 23:56:48.324204 containerd[2180]: 2026-01-23 23:56:47.988 [INFO][5445] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" iface="eth0" netns="/var/run/netns/cni-09670570-5508-4982-8cd5-b5473571320c" Jan 23 23:56:48.324204 containerd[2180]: 2026-01-23 23:56:47.988 [INFO][5445] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" iface="eth0" netns="/var/run/netns/cni-09670570-5508-4982-8cd5-b5473571320c" Jan 23 23:56:48.324204 containerd[2180]: 2026-01-23 23:56:47.988 [INFO][5445] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" Jan 23 23:56:48.324204 containerd[2180]: 2026-01-23 23:56:47.988 [INFO][5445] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" Jan 23 23:56:48.324204 containerd[2180]: 2026-01-23 23:56:48.237 [INFO][5482] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" HandleID="k8s-pod-network.47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" Workload="ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0" Jan 23 23:56:48.324204 containerd[2180]: 2026-01-23 23:56:48.237 [INFO][5482] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:48.324204 containerd[2180]: 2026-01-23 23:56:48.237 [INFO][5482] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:48.324204 containerd[2180]: 2026-01-23 23:56:48.270 [WARNING][5482] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" HandleID="k8s-pod-network.47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" Workload="ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0" Jan 23 23:56:48.324204 containerd[2180]: 2026-01-23 23:56:48.270 [INFO][5482] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" HandleID="k8s-pod-network.47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" Workload="ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0" Jan 23 23:56:48.324204 containerd[2180]: 2026-01-23 23:56:48.281 [INFO][5482] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:48.324204 containerd[2180]: 2026-01-23 23:56:48.310 [INFO][5445] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" Jan 23 23:56:48.325445 containerd[2180]: time="2026-01-23T23:56:48.324823024Z" level=info msg="CreateContainer within sandbox \"9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a510dd0b3e4f3faa8244ad76b86d044a53bfe640f9675f44108a6811399a72fa\"" Jan 23 23:56:48.326585 containerd[2180]: time="2026-01-23T23:56:48.325613872Z" level=info msg="TearDown network for sandbox \"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\" successfully" Jan 23 23:56:48.326585 containerd[2180]: time="2026-01-23T23:56:48.325722796Z" level=info msg="StopPodSandbox for \"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\" returns successfully" Jan 23 23:56:48.336884 containerd[2180]: time="2026-01-23T23:56:48.333155800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qtwrz,Uid:062e26d7-bfb2-4194-8340-6fddf424a2ce,Namespace:calico-system,Attempt:1,}" Jan 23 23:56:48.339982 containerd[2180]: time="2026-01-23T23:56:48.336746944Z" level=info msg="StartContainer for \"a510dd0b3e4f3faa8244ad76b86d044a53bfe640f9675f44108a6811399a72fa\"" Jan 23 23:56:48.343359 systemd[1]: run-netns-cni\x2d09670570\x2d5508\x2d4982\x2d8cd5\x2db5473571320c.mount: Deactivated successfully. Jan 23 23:56:48.434181 containerd[2180]: 2026-01-23 23:56:48.176 [INFO][5446] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" Jan 23 23:56:48.434181 containerd[2180]: 2026-01-23 23:56:48.176 [INFO][5446] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" iface="eth0" netns="/var/run/netns/cni-4ce85a8b-7d48-b91a-7cd5-02acd0ad6ae2" Jan 23 23:56:48.434181 containerd[2180]: 2026-01-23 23:56:48.177 [INFO][5446] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" iface="eth0" netns="/var/run/netns/cni-4ce85a8b-7d48-b91a-7cd5-02acd0ad6ae2" Jan 23 23:56:48.434181 containerd[2180]: 2026-01-23 23:56:48.177 [INFO][5446] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" iface="eth0" netns="/var/run/netns/cni-4ce85a8b-7d48-b91a-7cd5-02acd0ad6ae2" Jan 23 23:56:48.434181 containerd[2180]: 2026-01-23 23:56:48.178 [INFO][5446] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" Jan 23 23:56:48.434181 containerd[2180]: 2026-01-23 23:56:48.178 [INFO][5446] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" Jan 23 23:56:48.434181 containerd[2180]: 2026-01-23 23:56:48.379 [INFO][5512] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" HandleID="k8s-pod-network.3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" Workload="ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0" Jan 23 23:56:48.434181 containerd[2180]: 2026-01-23 23:56:48.381 [INFO][5512] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 23:56:48.434181 containerd[2180]: 2026-01-23 23:56:48.381 [INFO][5512] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:48.434181 containerd[2180]: 2026-01-23 23:56:48.405 [WARNING][5512] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" HandleID="k8s-pod-network.3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" Workload="ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0" Jan 23 23:56:48.434181 containerd[2180]: 2026-01-23 23:56:48.406 [INFO][5512] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" HandleID="k8s-pod-network.3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" Workload="ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0" Jan 23 23:56:48.434181 containerd[2180]: 2026-01-23 23:56:48.410 [INFO][5512] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:48.434181 containerd[2180]: 2026-01-23 23:56:48.420 [INFO][5446] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" Jan 23 23:56:48.437412 containerd[2180]: time="2026-01-23T23:56:48.436321324Z" level=info msg="TearDown network for sandbox \"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\" successfully" Jan 23 23:56:48.437412 containerd[2180]: time="2026-01-23T23:56:48.436375432Z" level=info msg="StopPodSandbox for \"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\" returns successfully" Jan 23 23:56:48.437625 containerd[2180]: time="2026-01-23T23:56:48.437489356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6677f6f656-js6vm,Uid:9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2,Namespace:calico-system,Attempt:1,}" Jan 23 23:56:48.455250 containerd[2180]: 2026-01-23 23:56:48.147 [INFO][5444] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" Jan 23 23:56:48.455250 containerd[2180]: 2026-01-23 23:56:48.147 [INFO][5444] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" iface="eth0" netns="/var/run/netns/cni-4fc51644-f435-32ff-c80e-29795fa8775b" Jan 23 23:56:48.455250 containerd[2180]: 2026-01-23 23:56:48.154 [INFO][5444] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" iface="eth0" netns="/var/run/netns/cni-4fc51644-f435-32ff-c80e-29795fa8775b" Jan 23 23:56:48.455250 containerd[2180]: 2026-01-23 23:56:48.162 [INFO][5444] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" iface="eth0" netns="/var/run/netns/cni-4fc51644-f435-32ff-c80e-29795fa8775b" Jan 23 23:56:48.455250 containerd[2180]: 2026-01-23 23:56:48.163 [INFO][5444] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" Jan 23 23:56:48.455250 containerd[2180]: 2026-01-23 23:56:48.163 [INFO][5444] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" Jan 23 23:56:48.455250 containerd[2180]: 2026-01-23 23:56:48.384 [INFO][5502] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" HandleID="k8s-pod-network.0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0" Jan 23 23:56:48.455250 containerd[2180]: 2026-01-23 23:56:48.385 [INFO][5502] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:48.455250 containerd[2180]: 2026-01-23 23:56:48.411 [INFO][5502] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:48.455250 containerd[2180]: 2026-01-23 23:56:48.432 [WARNING][5502] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" HandleID="k8s-pod-network.0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0" Jan 23 23:56:48.455250 containerd[2180]: 2026-01-23 23:56:48.432 [INFO][5502] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" HandleID="k8s-pod-network.0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0" Jan 23 23:56:48.455250 containerd[2180]: 2026-01-23 23:56:48.437 [INFO][5502] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:48.455250 containerd[2180]: 2026-01-23 23:56:48.447 [INFO][5444] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" Jan 23 23:56:48.456104 containerd[2180]: time="2026-01-23T23:56:48.455523532Z" level=info msg="TearDown network for sandbox \"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\" successfully" Jan 23 23:56:48.456104 containerd[2180]: time="2026-01-23T23:56:48.455691820Z" level=info msg="StopPodSandbox for \"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\" returns successfully" Jan 23 23:56:48.463838 containerd[2180]: time="2026-01-23T23:56:48.461345224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9ddld,Uid:e5b6c2c1-0276-4f5f-9587-f464f0aab16d,Namespace:kube-system,Attempt:1,}" Jan 23 23:56:48.596826 containerd[2180]: time="2026-01-23T23:56:48.590646281Z" level=info msg="StartContainer for \"a510dd0b3e4f3faa8244ad76b86d044a53bfe640f9675f44108a6811399a72fa\" returns successfully" Jan 23 23:56:48.742057 containerd[2180]: time="2026-01-23T23:56:48.741871830Z" level=info msg="StopPodSandbox for \"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\"" Jan 23 23:56:48.751431 containerd[2180]: time="2026-01-23T23:56:48.750110814Z" level=info msg="StopPodSandbox for \"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\"" Jan 23 23:56:49.030883 systemd-networkd[1714]: calif665c765f6d: Gained IPv6LL Jan 23 23:56:49.083695 systemd[1]: run-netns-cni\x2d4ce85a8b\x2d7d48\x2db91a\x2d7cd5\x2d02acd0ad6ae2.mount: Deactivated successfully. Jan 23 23:56:49.084498 systemd[1]: run-netns-cni\x2d4fc51644\x2df435\x2d32ff\x2dc80e\x2d29795fa8775b.mount: Deactivated successfully. Jan 23 23:56:49.153932 systemd-networkd[1714]: caliadb821dde35: Gained IPv6LL Jan 23 23:56:49.314323 systemd-networkd[1714]: cali4a2da5c944c: Link UP Jan 23 23:56:49.326620 systemd-networkd[1714]: cali4a2da5c944c: Gained carrier Jan 23 23:56:49.360008 kubelet[3714]: E0123 23:56:49.359939 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5497fbb7-6rwm7" podUID="e511819f-7fe1-47d1-b5b7-5258bf08f097" Jan 23 23:56:49.412198 containerd[2180]: 2026-01-23 23:56:48.606 [INFO][5530] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0 goldmane-666569f655- calico-system 062e26d7-bfb2-4194-8340-6fddf424a2ce 1041 0 2026-01-23 23:56:20 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-18-35 goldmane-666569f655-qtwrz eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4a2da5c944c [] [] }} ContainerID="90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96" Namespace="calico-system" Pod="goldmane-666569f655-qtwrz" WorkloadEndpoint="ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-" Jan 23 23:56:49.412198 containerd[2180]: 2026-01-23 23:56:48.606 [INFO][5530] cni-plugin/k8s.go 74: Extracted identifiers 
for CmdAddK8s ContainerID="90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96" Namespace="calico-system" Pod="goldmane-666569f655-qtwrz" WorkloadEndpoint="ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0" Jan 23 23:56:49.412198 containerd[2180]: 2026-01-23 23:56:49.094 [INFO][5591] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96" HandleID="k8s-pod-network.90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96" Workload="ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0" Jan 23 23:56:49.412198 containerd[2180]: 2026-01-23 23:56:49.094 [INFO][5591] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96" HandleID="k8s-pod-network.90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96" Workload="ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024a6b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-35", "pod":"goldmane-666569f655-qtwrz", "timestamp":"2026-01-23 23:56:49.094655931 +0000 UTC"}, Hostname:"ip-172-31-18-35", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:56:49.412198 containerd[2180]: 2026-01-23 23:56:49.096 [INFO][5591] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:49.412198 containerd[2180]: 2026-01-23 23:56:49.096 [INFO][5591] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:49.412198 containerd[2180]: 2026-01-23 23:56:49.096 [INFO][5591] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-35' Jan 23 23:56:49.412198 containerd[2180]: 2026-01-23 23:56:49.122 [INFO][5591] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96" host="ip-172-31-18-35" Jan 23 23:56:49.412198 containerd[2180]: 2026-01-23 23:56:49.138 [INFO][5591] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-35" Jan 23 23:56:49.412198 containerd[2180]: 2026-01-23 23:56:49.149 [INFO][5591] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:49.412198 containerd[2180]: 2026-01-23 23:56:49.156 [INFO][5591] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:49.412198 containerd[2180]: 2026-01-23 23:56:49.163 [INFO][5591] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:49.412198 containerd[2180]: 2026-01-23 23:56:49.164 [INFO][5591] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96" host="ip-172-31-18-35" Jan 23 23:56:49.412198 containerd[2180]: 2026-01-23 23:56:49.168 [INFO][5591] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96 Jan 23 23:56:49.412198 containerd[2180]: 2026-01-23 23:56:49.192 [INFO][5591] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96" host="ip-172-31-18-35" Jan 23 
23:56:49.412198 containerd[2180]: 2026-01-23 23:56:49.216 [INFO][5591] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.4/26] block=192.168.59.0/26 handle="k8s-pod-network.90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96" host="ip-172-31-18-35" Jan 23 23:56:49.412198 containerd[2180]: 2026-01-23 23:56:49.217 [INFO][5591] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.4/26] handle="k8s-pod-network.90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96" host="ip-172-31-18-35" Jan 23 23:56:49.412198 containerd[2180]: 2026-01-23 23:56:49.217 [INFO][5591] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:49.412198 containerd[2180]: 2026-01-23 23:56:49.219 [INFO][5591] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.4/26] IPv6=[] ContainerID="90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96" HandleID="k8s-pod-network.90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96" Workload="ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0" Jan 23 23:56:49.414832 containerd[2180]: 2026-01-23 23:56:49.255 [INFO][5530] cni-plugin/k8s.go 418: Populated endpoint ContainerID="90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96" Namespace="calico-system" Pod="goldmane-666569f655-qtwrz" WorkloadEndpoint="ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"062e26d7-bfb2-4194-8340-6fddf424a2ce", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"", Pod:"goldmane-666569f655-qtwrz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4a2da5c944c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:49.414832 containerd[2180]: 2026-01-23 23:56:49.257 [INFO][5530] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.4/32] ContainerID="90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96" Namespace="calico-system" Pod="goldmane-666569f655-qtwrz" WorkloadEndpoint="ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0" Jan 23 23:56:49.414832 containerd[2180]: 2026-01-23 23:56:49.257 [INFO][5530] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4a2da5c944c ContainerID="90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96" Namespace="calico-system" Pod="goldmane-666569f655-qtwrz" WorkloadEndpoint="ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0" Jan 23 
23:56:49.414832 containerd[2180]: 2026-01-23 23:56:49.331 [INFO][5530] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96" Namespace="calico-system" Pod="goldmane-666569f655-qtwrz" WorkloadEndpoint="ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0" Jan 23 23:56:49.414832 containerd[2180]: 2026-01-23 23:56:49.337 [INFO][5530] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96" Namespace="calico-system" Pod="goldmane-666569f655-qtwrz" WorkloadEndpoint="ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"062e26d7-bfb2-4194-8340-6fddf424a2ce", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96", Pod:"goldmane-666569f655-qtwrz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4a2da5c944c", MAC:"16:59:35:57:9b:51", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:49.414832 containerd[2180]: 2026-01-23 23:56:49.396 [INFO][5530] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96" Namespace="calico-system" Pod="goldmane-666569f655-qtwrz" WorkloadEndpoint="ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0" Jan 23 23:56:49.626619 kubelet[3714]: I0123 23:56:49.620135 3714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dxn2k" podStartSLOduration=57.620099706 podStartE2EDuration="57.620099706s" podCreationTimestamp="2026-01-23 23:55:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:56:49.548172486 +0000 UTC m=+61.123366097" watchObservedRunningTime="2026-01-23 23:56:49.620099706 +0000 UTC m=+61.195292981" Jan 23 23:56:49.633947 systemd-networkd[1714]: cali71445a8f261: Link UP Jan 23 23:56:49.666190 systemd-networkd[1714]: cali71445a8f261: Gained carrier Jan 23 23:56:49.733189 containerd[2180]: 2026-01-23 23:56:48.979 [WARNING][5627] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0", GenerateName:"calico-apiserver-6d5497fbb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e511819f-7fe1-47d1-b5b7-5258bf08f097", ResourceVersion:"1030", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5497fbb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d", Pod:"calico-apiserver-6d5497fbb7-6rwm7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadb821dde35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:49.733189 containerd[2180]: 2026-01-23 23:56:48.981 [INFO][5627] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" Jan 23 23:56:49.733189 containerd[2180]: 2026-01-23 23:56:48.981 [INFO][5627] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" iface="eth0" netns="" Jan 23 23:56:49.733189 containerd[2180]: 2026-01-23 23:56:48.984 [INFO][5627] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" Jan 23 23:56:49.733189 containerd[2180]: 2026-01-23 23:56:48.984 [INFO][5627] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" Jan 23 23:56:49.733189 containerd[2180]: 2026-01-23 23:56:49.180 [INFO][5645] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" HandleID="k8s-pod-network.21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0" Jan 23 23:56:49.733189 containerd[2180]: 2026-01-23 23:56:49.180 [INFO][5645] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:49.733189 containerd[2180]: 2026-01-23 23:56:49.494 [INFO][5645] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:49.733189 containerd[2180]: 2026-01-23 23:56:49.584 [WARNING][5645] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" HandleID="k8s-pod-network.21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0" Jan 23 23:56:49.733189 containerd[2180]: 2026-01-23 23:56:49.585 [INFO][5645] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" HandleID="k8s-pod-network.21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0" Jan 23 23:56:49.733189 containerd[2180]: 2026-01-23 23:56:49.602 [INFO][5645] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:49.733189 containerd[2180]: 2026-01-23 23:56:49.649 [INFO][5627] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" Jan 23 23:56:49.736600 containerd[2180]: time="2026-01-23T23:56:49.733283539Z" level=info msg="TearDown network for sandbox \"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\" successfully" Jan 23 23:56:49.736600 containerd[2180]: time="2026-01-23T23:56:49.733340059Z" level=info msg="StopPodSandbox for \"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\" returns successfully" Jan 23 23:56:49.752137 containerd[2180]: time="2026-01-23T23:56:49.747656419Z" level=info msg="RemovePodSandbox for \"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\"" Jan 23 23:56:49.752137 containerd[2180]: time="2026-01-23T23:56:49.747743755Z" level=info msg="Forcibly stopping sandbox \"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\"" Jan 23 23:56:49.757603 containerd[2180]: time="2026-01-23T23:56:49.757476055Z" level=info msg="StopPodSandbox for \"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\"" Jan 23 23:56:49.783421 containerd[2180]: time="2026-01-23T23:56:49.757927027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:49.783421 containerd[2180]: time="2026-01-23T23:56:49.758413963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:49.783421 containerd[2180]: time="2026-01-23T23:56:49.758445187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:49.783421 containerd[2180]: time="2026-01-23T23:56:49.759885679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:49.823251 systemd-journald[1627]: Under memory pressure, flushing caches. Jan 23 23:56:49.797496 systemd-resolved[2041]: Under memory pressure, flushing caches. Jan 23 23:56:49.797563 systemd-resolved[2041]: Flushed all caches. 
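The pull failures above (ErrImagePull for ghcr.io/flatcar/calico/apiserver:v3.30.4, followed by ImagePullBackOff on the next pod sync) show kubelet's standard retry behaviour: the NotFound error fails the container start, and each later sync attempt is deferred on a growing back-off. A rough Go sketch of a doubling-with-cap schedule follows; the intervals and cap are assumptions for illustration, not kubelet's implementation.

package main

import (
	"errors"
	"fmt"
	"time"
)

var errNotFound = errors.New("rpc error: code = NotFound")

// pullImage stands in for the CRI image-pull call; like the pulls in the
// log, it always fails because the tag cannot be resolved.
func pullImage(ref string) error {
	return fmt.Errorf("failed to resolve reference %q: %w", ref, errNotFound)
}

func main() {
	ref := "ghcr.io/flatcar/calico/apiserver:v3.30.4"
	backoff := 10 * time.Second        // hypothetical initial delay
	const maxBackoff = 5 * time.Minute // hypothetical cap
	for attempt := 1; attempt <= 4; attempt++ {
		if err := pullImage(ref); err != nil {
			fmt.Printf("attempt %d: ErrImagePull: %v (next retry in %s)\n", attempt, err, backoff)
			time.Sleep(backoff / 100000) // scaled down so the sketch finishes instantly
			backoff *= 2
			if backoff > maxBackoff {
				backoff = maxBackoff
			}
		}
	}
}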
Jan 23 23:56:49.937709 containerd[2180]: 2026-01-23 23:56:48.845 [INFO][5559] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0 calico-kube-controllers-6677f6f656- calico-system 9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2 1045 0 2026-01-23 23:56:24 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6677f6f656 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-35 calico-kube-controllers-6677f6f656-js6vm eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali71445a8f261 [] [] }} ContainerID="75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a" Namespace="calico-system" Pod="calico-kube-controllers-6677f6f656-js6vm" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-" Jan 23 23:56:49.937709 containerd[2180]: 2026-01-23 23:56:48.858 [INFO][5559] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a" Namespace="calico-system" Pod="calico-kube-controllers-6677f6f656-js6vm" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0" Jan 23 23:56:49.937709 containerd[2180]: 2026-01-23 23:56:49.135 [INFO][5632] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a" HandleID="k8s-pod-network.75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a" Workload="ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0" Jan 23 23:56:49.937709 containerd[2180]: 2026-01-23 23:56:49.136 [INFO][5632] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a" HandleID="k8s-pod-network.75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a" Workload="ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000120190), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-35", "pod":"calico-kube-controllers-6677f6f656-js6vm", "timestamp":"2026-01-23 23:56:49.135118396 +0000 UTC"}, Hostname:"ip-172-31-18-35", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:56:49.937709 containerd[2180]: 2026-01-23 23:56:49.136 [INFO][5632] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:49.937709 containerd[2180]: 2026-01-23 23:56:49.218 [INFO][5632] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:56:49.937709 containerd[2180]: 2026-01-23 23:56:49.221 [INFO][5632] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-35' Jan 23 23:56:49.937709 containerd[2180]: 2026-01-23 23:56:49.276 [INFO][5632] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a" host="ip-172-31-18-35" Jan 23 23:56:49.937709 containerd[2180]: 2026-01-23 23:56:49.295 [INFO][5632] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-35" Jan 23 23:56:49.937709 containerd[2180]: 2026-01-23 23:56:49.304 [INFO][5632] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:49.937709 containerd[2180]: 2026-01-23 23:56:49.310 [INFO][5632] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:49.937709 containerd[2180]: 2026-01-23 23:56:49.339 [INFO][5632] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:49.937709 containerd[2180]: 2026-01-23 23:56:49.339 [INFO][5632] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a" host="ip-172-31-18-35" Jan 23 23:56:49.937709 containerd[2180]: 2026-01-23 23:56:49.362 [INFO][5632] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a Jan 23 23:56:49.937709 containerd[2180]: 2026-01-23 23:56:49.438 [INFO][5632] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a" host="ip-172-31-18-35" Jan 23 23:56:49.937709 containerd[2180]: 2026-01-23 23:56:49.492 [INFO][5632] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.5/26] block=192.168.59.0/26 handle="k8s-pod-network.75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a" host="ip-172-31-18-35" Jan 23 23:56:49.937709 containerd[2180]: 2026-01-23 23:56:49.492 [INFO][5632] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.5/26] handle="k8s-pod-network.75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a" host="ip-172-31-18-35" Jan 23 23:56:49.937709 containerd[2180]: 2026-01-23 23:56:49.494 [INFO][5632] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
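Each new endpoint above is wired to a stable host-side veth (calif665c765f6d, cali4a2da5c944c, cali71445a8f261, cali0117be6fdcf) before systemd-networkd reports it gaining carrier. One plausible derivation, hashing the workload key and truncating to the kernel's 15-character interface-name limit, is sketched below; the log does not show the input or hash Calico actually uses, so this is purely illustrative.

package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName maps a workload key to a deterministic "cali" + 11-hex-char
// interface name (15 characters total, within IFNAMSIZ).
func vethName(workloadKey string) string {
	sum := sha1.Sum([]byte(workloadKey))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	for _, key := range []string{
		"kube-system/coredns-668d6bf9bc-dxn2k",
		"calico-system/goldmane-666569f655-qtwrz",
		"calico-system/calico-kube-controllers-6677f6f656-js6vm",
	} {
		fmt.Printf("%-55s -> %s\n", key, vethName(key))
	}
}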
Jan 23 23:56:49.937709 containerd[2180]: 2026-01-23 23:56:49.494 [INFO][5632] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.5/26] IPv6=[] ContainerID="75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a" HandleID="k8s-pod-network.75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a" Workload="ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0" Jan 23 23:56:49.941918 containerd[2180]: 2026-01-23 23:56:49.539 [INFO][5559] cni-plugin/k8s.go 418: Populated endpoint ContainerID="75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a" Namespace="calico-system" Pod="calico-kube-controllers-6677f6f656-js6vm" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0", GenerateName:"calico-kube-controllers-6677f6f656-", Namespace:"calico-system", SelfLink:"", UID:"9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6677f6f656", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"", Pod:"calico-kube-controllers-6677f6f656-js6vm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali71445a8f261", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:49.941918 containerd[2180]: 2026-01-23 23:56:49.539 [INFO][5559] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.5/32] ContainerID="75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a" Namespace="calico-system" Pod="calico-kube-controllers-6677f6f656-js6vm" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0" Jan 23 23:56:49.941918 containerd[2180]: 2026-01-23 23:56:49.540 [INFO][5559] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali71445a8f261 ContainerID="75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a" Namespace="calico-system" Pod="calico-kube-controllers-6677f6f656-js6vm" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0" Jan 23 23:56:49.941918 containerd[2180]: 2026-01-23 23:56:49.737 [INFO][5559] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a" Namespace="calico-system" Pod="calico-kube-controllers-6677f6f656-js6vm" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0" Jan 23 23:56:49.941918 containerd[2180]: 2026-01-23 
23:56:49.767 [INFO][5559] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a" Namespace="calico-system" Pod="calico-kube-controllers-6677f6f656-js6vm" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0", GenerateName:"calico-kube-controllers-6677f6f656-", Namespace:"calico-system", SelfLink:"", UID:"9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6677f6f656", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a", Pod:"calico-kube-controllers-6677f6f656-js6vm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali71445a8f261", MAC:"fe:8e:55:86:16:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:49.941918 containerd[2180]: 2026-01-23 23:56:49.825 [INFO][5559] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a" Namespace="calico-system" Pod="calico-kube-controllers-6677f6f656-js6vm" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0" Jan 23 23:56:50.164253 systemd-networkd[1714]: cali0117be6fdcf: Link UP Jan 23 23:56:50.177422 systemd-networkd[1714]: cali0117be6fdcf: Gained carrier Jan 23 23:56:50.338991 containerd[2180]: 2026-01-23 23:56:48.873 [INFO][5577] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0 coredns-668d6bf9bc- kube-system e5b6c2c1-0276-4f5f-9587-f464f0aab16d 1044 0 2026-01-23 23:55:52 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-35 coredns-668d6bf9bc-9ddld eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0117be6fdcf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e" Namespace="kube-system" Pod="coredns-668d6bf9bc-9ddld" WorkloadEndpoint="ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-" Jan 23 23:56:50.338991 containerd[2180]: 2026-01-23 23:56:48.881 [INFO][5577] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e" Namespace="kube-system" Pod="coredns-668d6bf9bc-9ddld" WorkloadEndpoint="ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0" Jan 23 23:56:50.338991 containerd[2180]: 2026-01-23 23:56:49.267 [INFO][5635] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e" HandleID="k8s-pod-network.72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0" Jan 23 23:56:50.338991 containerd[2180]: 2026-01-23 23:56:49.279 [INFO][5635] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e" HandleID="k8s-pod-network.72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004cac0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-35", "pod":"coredns-668d6bf9bc-9ddld", "timestamp":"2026-01-23 23:56:49.266566984 +0000 UTC"}, Hostname:"ip-172-31-18-35", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:56:50.338991 containerd[2180]: 2026-01-23 23:56:49.283 [INFO][5635] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:50.338991 containerd[2180]: 2026-01-23 23:56:49.623 [INFO][5635] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:50.338991 containerd[2180]: 2026-01-23 23:56:49.623 [INFO][5635] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-35' Jan 23 23:56:50.338991 containerd[2180]: 2026-01-23 23:56:49.845 [INFO][5635] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e" host="ip-172-31-18-35" Jan 23 23:56:50.338991 containerd[2180]: 2026-01-23 23:56:49.923 [INFO][5635] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-35" Jan 23 23:56:50.338991 containerd[2180]: 2026-01-23 23:56:49.975 [INFO][5635] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:50.338991 containerd[2180]: 2026-01-23 23:56:49.988 [INFO][5635] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:50.338991 containerd[2180]: 2026-01-23 23:56:50.013 [INFO][5635] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:50.338991 containerd[2180]: 2026-01-23 23:56:50.013 [INFO][5635] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e" host="ip-172-31-18-35" Jan 23 23:56:50.338991 containerd[2180]: 2026-01-23 23:56:50.047 [INFO][5635] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e Jan 23 23:56:50.338991 containerd[2180]: 2026-01-23 23:56:50.085 [INFO][5635] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e" host="ip-172-31-18-35" Jan 23 23:56:50.338991 
containerd[2180]: 2026-01-23 23:56:50.114 [INFO][5635] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.6/26] block=192.168.59.0/26 handle="k8s-pod-network.72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e" host="ip-172-31-18-35" Jan 23 23:56:50.338991 containerd[2180]: 2026-01-23 23:56:50.115 [INFO][5635] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.6/26] handle="k8s-pod-network.72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e" host="ip-172-31-18-35" Jan 23 23:56:50.338991 containerd[2180]: 2026-01-23 23:56:50.115 [INFO][5635] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:50.338991 containerd[2180]: 2026-01-23 23:56:50.115 [INFO][5635] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.6/26] IPv6=[] ContainerID="72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e" HandleID="k8s-pod-network.72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0" Jan 23 23:56:50.344797 containerd[2180]: 2026-01-23 23:56:50.142 [INFO][5577] cni-plugin/k8s.go 418: Populated endpoint ContainerID="72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e" Namespace="kube-system" Pod="coredns-668d6bf9bc-9ddld" WorkloadEndpoint="ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e5b6c2c1-0276-4f5f-9587-f464f0aab16d", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"", Pod:"coredns-668d6bf9bc-9ddld", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0117be6fdcf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:50.344797 containerd[2180]: 2026-01-23 23:56:50.144 [INFO][5577] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.6/32] ContainerID="72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e" Namespace="kube-system" Pod="coredns-668d6bf9bc-9ddld" WorkloadEndpoint="ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0" Jan 23 23:56:50.344797 containerd[2180]: 
2026-01-23 23:56:50.146 [INFO][5577] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0117be6fdcf ContainerID="72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e" Namespace="kube-system" Pod="coredns-668d6bf9bc-9ddld" WorkloadEndpoint="ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0" Jan 23 23:56:50.344797 containerd[2180]: 2026-01-23 23:56:50.180 [INFO][5577] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e" Namespace="kube-system" Pod="coredns-668d6bf9bc-9ddld" WorkloadEndpoint="ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0" Jan 23 23:56:50.344797 containerd[2180]: 2026-01-23 23:56:50.211 [INFO][5577] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e" Namespace="kube-system" Pod="coredns-668d6bf9bc-9ddld" WorkloadEndpoint="ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e5b6c2c1-0276-4f5f-9587-f464f0aab16d", ResourceVersion:"1044", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e", Pod:"coredns-668d6bf9bc-9ddld", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0117be6fdcf", MAC:"d2:87:c8:a6:6c:e9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:50.344797 containerd[2180]: 2026-01-23 23:56:50.287 [INFO][5577] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e" Namespace="kube-system" Pod="coredns-668d6bf9bc-9ddld" WorkloadEndpoint="ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0" Jan 23 23:56:50.354417 containerd[2180]: time="2026-01-23T23:56:50.352017954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:50.356627 containerd[2180]: time="2026-01-23T23:56:50.356193282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:50.359292 containerd[2180]: time="2026-01-23T23:56:50.356793714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:50.367937 containerd[2180]: time="2026-01-23T23:56:50.364268334Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:50.372491 containerd[2180]: 2026-01-23 23:56:49.238 [INFO][5619] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" Jan 23 23:56:50.372491 containerd[2180]: 2026-01-23 23:56:49.244 [INFO][5619] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" iface="eth0" netns="/var/run/netns/cni-7288b7a0-42e8-0fba-dfe0-6e6cfb102062" Jan 23 23:56:50.372491 containerd[2180]: 2026-01-23 23:56:49.251 [INFO][5619] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" iface="eth0" netns="/var/run/netns/cni-7288b7a0-42e8-0fba-dfe0-6e6cfb102062" Jan 23 23:56:50.372491 containerd[2180]: 2026-01-23 23:56:49.259 [INFO][5619] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" iface="eth0" netns="/var/run/netns/cni-7288b7a0-42e8-0fba-dfe0-6e6cfb102062" Jan 23 23:56:50.372491 containerd[2180]: 2026-01-23 23:56:49.259 [INFO][5619] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" Jan 23 23:56:50.372491 containerd[2180]: 2026-01-23 23:56:49.259 [INFO][5619] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" Jan 23 23:56:50.372491 containerd[2180]: 2026-01-23 23:56:49.815 [INFO][5656] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" HandleID="k8s-pod-network.064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" Workload="ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0" Jan 23 23:56:50.372491 containerd[2180]: 2026-01-23 23:56:49.816 [INFO][5656] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:50.372491 containerd[2180]: 2026-01-23 23:56:50.120 [INFO][5656] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:50.372491 containerd[2180]: 2026-01-23 23:56:50.219 [WARNING][5656] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" HandleID="k8s-pod-network.064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" Workload="ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0" Jan 23 23:56:50.372491 containerd[2180]: 2026-01-23 23:56:50.219 [INFO][5656] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" HandleID="k8s-pod-network.064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" Workload="ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0" Jan 23 23:56:50.372491 containerd[2180]: 2026-01-23 23:56:50.257 [INFO][5656] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:50.372491 containerd[2180]: 2026-01-23 23:56:50.343 [INFO][5619] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" Jan 23 23:56:50.386371 containerd[2180]: time="2026-01-23T23:56:50.385781694Z" level=info msg="TearDown network for sandbox \"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\" successfully" Jan 23 23:56:50.386371 containerd[2180]: time="2026-01-23T23:56:50.385848078Z" level=info msg="StopPodSandbox for \"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\" returns successfully" Jan 23 23:56:50.390493 systemd[1]: run-netns-cni\x2d7288b7a0\x2d42e8\x2d0fba\x2ddfe0\x2d6e6cfb102062.mount: Deactivated successfully. Jan 23 23:56:50.394071 containerd[2180]: time="2026-01-23T23:56:50.393678222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-md8cr,Uid:ef69f672-ed17-43f4-a4a8-8456f661673c,Namespace:calico-system,Attempt:1,}" Jan 23 23:56:50.395297 containerd[2180]: time="2026-01-23T23:56:50.394793754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-qtwrz,Uid:062e26d7-bfb2-4194-8340-6fddf424a2ce,Namespace:calico-system,Attempt:1,} returns sandbox id \"90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96\"" Jan 23 23:56:50.416956 containerd[2180]: time="2026-01-23T23:56:50.411159414Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:56:50.589614 systemd[1]: Started sshd@8-172.31.18.35:22-4.153.228.146:34144.service - OpenSSH per-connection server daemon (4.153.228.146:34144). Jan 23 23:56:50.605124 containerd[2180]: time="2026-01-23T23:56:50.604918195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:50.615317 containerd[2180]: time="2026-01-23T23:56:50.611938723Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:50.615317 containerd[2180]: time="2026-01-23T23:56:50.611994811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:50.615317 containerd[2180]: time="2026-01-23T23:56:50.612190255Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:50.801323 containerd[2180]: time="2026-01-23T23:56:50.801141656Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:50.812500 containerd[2180]: time="2026-01-23T23:56:50.810436376Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:56:50.813307 containerd[2180]: time="2026-01-23T23:56:50.810921536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 23:56:50.813462 kubelet[3714]: E0123 23:56:50.812914 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:56:50.813462 kubelet[3714]: E0123 23:56:50.812997 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:56:50.824352 kubelet[3714]: E0123 23:56:50.823651 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdmj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qtwrz_calico-system(062e26d7-bfb2-4194-8340-6fddf424a2ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:50.825479 kubelet[3714]: E0123 23:56:50.825366 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwrz" podUID="062e26d7-bfb2-4194-8340-6fddf424a2ce" Jan 23 23:56:50.858546 containerd[2180]: time="2026-01-23T23:56:50.856257692Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6677f6f656-js6vm,Uid:9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2,Namespace:calico-system,Attempt:1,} returns sandbox id \"75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a\"" Jan 23 23:56:50.883244 containerd[2180]: time="2026-01-23T23:56:50.883077968Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:56:50.924609 containerd[2180]: 2026-01-23 23:56:50.540 [INFO][5724] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" Jan 23 23:56:50.924609 containerd[2180]: 2026-01-23 23:56:50.549 [INFO][5724] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" iface="eth0" netns="/var/run/netns/cni-455e0739-0559-d3e1-f95e-ea8ab528a264" Jan 23 23:56:50.924609 containerd[2180]: 2026-01-23 23:56:50.554 [INFO][5724] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" iface="eth0" netns="/var/run/netns/cni-455e0739-0559-d3e1-f95e-ea8ab528a264" Jan 23 23:56:50.924609 containerd[2180]: 2026-01-23 23:56:50.555 [INFO][5724] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" iface="eth0" netns="/var/run/netns/cni-455e0739-0559-d3e1-f95e-ea8ab528a264" Jan 23 23:56:50.924609 containerd[2180]: 2026-01-23 23:56:50.556 [INFO][5724] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" Jan 23 23:56:50.924609 containerd[2180]: 2026-01-23 23:56:50.556 [INFO][5724] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" Jan 23 23:56:50.924609 containerd[2180]: 2026-01-23 23:56:50.786 [INFO][5825] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" HandleID="k8s-pod-network.0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0" Jan 23 23:56:50.924609 containerd[2180]: 2026-01-23 23:56:50.788 [INFO][5825] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:50.924609 containerd[2180]: 2026-01-23 23:56:50.793 [INFO][5825] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:50.924609 containerd[2180]: 2026-01-23 23:56:50.859 [WARNING][5825] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" HandleID="k8s-pod-network.0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0" Jan 23 23:56:50.924609 containerd[2180]: 2026-01-23 23:56:50.860 [INFO][5825] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" HandleID="k8s-pod-network.0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0" Jan 23 23:56:50.924609 containerd[2180]: 2026-01-23 23:56:50.870 [INFO][5825] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:50.924609 containerd[2180]: 2026-01-23 23:56:50.890 [INFO][5724] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" Jan 23 23:56:50.927403 containerd[2180]: time="2026-01-23T23:56:50.926102061Z" level=info msg="TearDown network for sandbox \"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\" successfully" Jan 23 23:56:50.927403 containerd[2180]: time="2026-01-23T23:56:50.926181621Z" level=info msg="StopPodSandbox for \"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\" returns successfully" Jan 23 23:56:50.934137 containerd[2180]: time="2026-01-23T23:56:50.933887277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5497fbb7-xhxnx,Uid:70864b69-f424-425f-943d-f03fcd5d49da,Namespace:calico-apiserver,Attempt:1,}" Jan 23 23:56:50.946870 containerd[2180]: 2026-01-23 23:56:50.457 [WARNING][5722] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0", GenerateName:"calico-apiserver-6d5497fbb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"e511819f-7fe1-47d1-b5b7-5258bf08f097", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5497fbb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"825e660a4e4d83eb917fa0d448ec0319cf736a270bace00622d9525652a50b3d", Pod:"calico-apiserver-6d5497fbb7-6rwm7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliadb821dde35", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:50.946870 containerd[2180]: 2026-01-23 23:56:50.457 [INFO][5722] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" Jan 23 23:56:50.946870 containerd[2180]: 2026-01-23 23:56:50.457 [INFO][5722] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" iface="eth0" netns="" Jan 23 23:56:50.946870 containerd[2180]: 2026-01-23 23:56:50.457 [INFO][5722] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" Jan 23 23:56:50.946870 containerd[2180]: 2026-01-23 23:56:50.457 [INFO][5722] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" Jan 23 23:56:50.946870 containerd[2180]: 2026-01-23 23:56:50.808 [INFO][5805] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" HandleID="k8s-pod-network.21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0" Jan 23 23:56:50.946870 containerd[2180]: 2026-01-23 23:56:50.808 [INFO][5805] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:50.946870 containerd[2180]: 2026-01-23 23:56:50.874 [INFO][5805] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:50.946870 containerd[2180]: 2026-01-23 23:56:50.909 [WARNING][5805] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" HandleID="k8s-pod-network.21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0" Jan 23 23:56:50.946870 containerd[2180]: 2026-01-23 23:56:50.909 [INFO][5805] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" HandleID="k8s-pod-network.21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--6rwm7-eth0" Jan 23 23:56:50.946870 containerd[2180]: 2026-01-23 23:56:50.919 [INFO][5805] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:50.946870 containerd[2180]: 2026-01-23 23:56:50.939 [INFO][5722] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf" Jan 23 23:56:50.949851 containerd[2180]: time="2026-01-23T23:56:50.946886157Z" level=info msg="TearDown network for sandbox \"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\" successfully" Jan 23 23:56:50.966103 containerd[2180]: time="2026-01-23T23:56:50.965953161Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-9ddld,Uid:e5b6c2c1-0276-4f5f-9587-f464f0aab16d,Namespace:kube-system,Attempt:1,} returns sandbox id \"72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e\"" Jan 23 23:56:50.972971 containerd[2180]: time="2026-01-23T23:56:50.972269493Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 23 23:56:50.972971 containerd[2180]: time="2026-01-23T23:56:50.972428385Z" level=info msg="RemovePodSandbox \"21c8ba923b0e1932b0ede743903f168f1ac5e0dd56b4ffee26c5561a946fa2bf\" returns successfully" Jan 23 23:56:50.984506 containerd[2180]: time="2026-01-23T23:56:50.984095877Z" level=info msg="StopPodSandbox for \"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\"" Jan 23 23:56:51.014725 containerd[2180]: time="2026-01-23T23:56:51.014669513Z" level=info msg="CreateContainer within sandbox \"72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:56:51.075283 systemd-networkd[1714]: cali4a2da5c944c: Gained IPv6LL Jan 23 23:56:51.103553 containerd[2180]: time="2026-01-23T23:56:51.103276793Z" level=info msg="CreateContainer within sandbox \"72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb07c7fcc887c0818118b3dc007113d200c78c7f4f9a39b23fc114968542eb2e\"" Jan 23 23:56:51.106427 containerd[2180]: time="2026-01-23T23:56:51.106120673Z" level=info msg="StartContainer for \"bb07c7fcc887c0818118b3dc007113d200c78c7f4f9a39b23fc114968542eb2e\"" Jan 23 23:56:51.129965 systemd-networkd[1714]: cali8ed232d3950: Link UP Jan 23 23:56:51.132962 systemd-networkd[1714]: cali8ed232d3950: Gained carrier Jan 23 23:56:51.138985 systemd-networkd[1714]: cali71445a8f261: Gained IPv6LL Jan 23 23:56:51.158704 sshd[5833]: Accepted publickey for core from 4.153.228.146 port 34144 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:51.173172 sshd[5833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:51.188017 containerd[2180]: time="2026-01-23T23:56:51.187577622Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:51.192992 systemd-logind[2134]: New session 9 of user core. 
Jan 23 23:56:51.196521 containerd[2180]: 2026-01-23 23:56:50.767 [INFO][5807] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0 csi-node-driver- calico-system ef69f672-ed17-43f4-a4a8-8456f661673c 1061 0 2026-01-23 23:56:24 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-18-35 csi-node-driver-md8cr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8ed232d3950 [] [] }} ContainerID="7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4" Namespace="calico-system" Pod="csi-node-driver-md8cr" WorkloadEndpoint="ip--172--31--18--35-k8s-csi--node--driver--md8cr-" Jan 23 23:56:51.196521 containerd[2180]: 2026-01-23 23:56:50.767 [INFO][5807] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4" Namespace="calico-system" Pod="csi-node-driver-md8cr" WorkloadEndpoint="ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0" Jan 23 23:56:51.196521 containerd[2180]: 2026-01-23 23:56:50.967 [INFO][5873] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4" HandleID="k8s-pod-network.7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4" Workload="ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0" Jan 23 23:56:51.196521 containerd[2180]: 2026-01-23 23:56:50.969 [INFO][5873] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4" HandleID="k8s-pod-network.7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4" Workload="ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003b21f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-35", "pod":"csi-node-driver-md8cr", "timestamp":"2026-01-23 23:56:50.967674861 +0000 UTC"}, Hostname:"ip-172-31-18-35", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:56:51.196521 containerd[2180]: 2026-01-23 23:56:50.969 [INFO][5873] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:51.196521 containerd[2180]: 2026-01-23 23:56:50.970 [INFO][5873] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:56:51.196521 containerd[2180]: 2026-01-23 23:56:50.970 [INFO][5873] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-35' Jan 23 23:56:51.196521 containerd[2180]: 2026-01-23 23:56:50.994 [INFO][5873] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4" host="ip-172-31-18-35" Jan 23 23:56:51.196521 containerd[2180]: 2026-01-23 23:56:51.018 [INFO][5873] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-35" Jan 23 23:56:51.196521 containerd[2180]: 2026-01-23 23:56:51.030 [INFO][5873] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:51.196521 containerd[2180]: 2026-01-23 23:56:51.036 [INFO][5873] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:51.196521 containerd[2180]: 2026-01-23 23:56:51.042 [INFO][5873] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:51.196521 containerd[2180]: 2026-01-23 23:56:51.042 [INFO][5873] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4" host="ip-172-31-18-35" Jan 23 23:56:51.196521 containerd[2180]: 2026-01-23 23:56:51.067 [INFO][5873] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4 Jan 23 23:56:51.196521 containerd[2180]: 2026-01-23 23:56:51.080 [INFO][5873] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4" host="ip-172-31-18-35" Jan 23 23:56:51.196521 containerd[2180]: 2026-01-23 23:56:51.098 [INFO][5873] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.7/26] block=192.168.59.0/26 handle="k8s-pod-network.7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4" host="ip-172-31-18-35" Jan 23 23:56:51.196521 containerd[2180]: 2026-01-23 23:56:51.100 [INFO][5873] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.7/26] handle="k8s-pod-network.7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4" host="ip-172-31-18-35" Jan 23 23:56:51.196521 containerd[2180]: 2026-01-23 23:56:51.100 [INFO][5873] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
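[Annotation] The ipam/ipam.go sequence above — try the host's affinity for 192.168.59.0/26, load the block, claim the next free address under a new handle, write the block back, release the lock — is easiest to follow as plain arithmetic over the /26. Below is a self-contained Go sketch of just the claim step, with a map standing in for the datastore-backed block; it is an illustration of the logged steps, not Calico's actual implementation (which addresses below .7 are taken is partly assumed — the log only confirms .2, .5 and .6 are in use):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.59.0/26") // 64-address block affine to ip-172-31-18-35
	allocated := map[netip.Addr]string{}              // addr -> IPAM handle (stand-in for the datastore block)

	// Assume .0-.6 are unavailable in this walk-through (the log shows .2, .5
	// and .6 already assigned to other pods on this node; the rest assumed).
	for i, a := 0, block.Addr(); i < 7; i, a = i+1, a.Next() {
		allocated[a] = "earlier-handle"
	}

	// "Creating new handle" / "Writing block in order to claim IPs": record
	// the first free address under the pod's handle, then persist the block.
	handle := "k8s-pod-network.7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4"
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		if _, taken := allocated[a]; !taken {
			allocated[a] = handle
			fmt.Printf("Successfully claimed IPs: [%s/26] block=%s\n", a, block)
			break
		}
	}
}

Running this prints "Successfully claimed IPs: [192.168.59.7/26] block=192.168.59.0/26", matching the csi-node-driver-md8cr assignment logged above; the host-wide lock around the whole sequence is what serializes concurrent CNI ADDs on the node.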
Jan 23 23:56:51.196521 containerd[2180]: 2026-01-23 23:56:51.100 [INFO][5873] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.7/26] IPv6=[] ContainerID="7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4" HandleID="k8s-pod-network.7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4" Workload="ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0" Jan 23 23:56:51.200780 containerd[2180]: 2026-01-23 23:56:51.113 [INFO][5807] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4" Namespace="calico-system" Pod="csi-node-driver-md8cr" WorkloadEndpoint="ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef69f672-ed17-43f4-a4a8-8456f661673c", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"", Pod:"csi-node-driver-md8cr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ed232d3950", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:51.200780 containerd[2180]: 2026-01-23 23:56:51.115 [INFO][5807] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.7/32] ContainerID="7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4" Namespace="calico-system" Pod="csi-node-driver-md8cr" WorkloadEndpoint="ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0" Jan 23 23:56:51.200780 containerd[2180]: 2026-01-23 23:56:51.117 [INFO][5807] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8ed232d3950 ContainerID="7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4" Namespace="calico-system" Pod="csi-node-driver-md8cr" WorkloadEndpoint="ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0" Jan 23 23:56:51.200780 containerd[2180]: 2026-01-23 23:56:51.134 [INFO][5807] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4" Namespace="calico-system" Pod="csi-node-driver-md8cr" WorkloadEndpoint="ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0" Jan 23 23:56:51.200780 containerd[2180]: 2026-01-23 23:56:51.138 [INFO][5807] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4" 
Namespace="calico-system" Pod="csi-node-driver-md8cr" WorkloadEndpoint="ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef69f672-ed17-43f4-a4a8-8456f661673c", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4", Pod:"csi-node-driver-md8cr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ed232d3950", MAC:"c2:a0:c8:04:75:86", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:51.200780 containerd[2180]: 2026-01-23 23:56:51.168 [INFO][5807] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4" Namespace="calico-system" Pod="csi-node-driver-md8cr" WorkloadEndpoint="ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0" Jan 23 23:56:51.200780 containerd[2180]: time="2026-01-23T23:56:51.199078590Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:56:51.200780 containerd[2180]: time="2026-01-23T23:56:51.199267134Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:56:51.202690 kubelet[3714]: E0123 23:56:51.202002 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:56:51.202690 kubelet[3714]: E0123 23:56:51.202070 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 
23 23:56:51.202690 kubelet[3714]: E0123 23:56:51.202247 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7cbxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6677f6f656-js6vm_calico-system(9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:51.204013 kubelet[3714]: E0123 23:56:51.203821 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6677f6f656-js6vm" 
podUID="9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2" Jan 23 23:56:51.205756 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 23:56:51.397503 kubelet[3714]: E0123 23:56:51.393910 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6677f6f656-js6vm" podUID="9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2" Jan 23 23:56:51.417317 systemd[1]: run-netns-cni\x2d455e0739\x2d0559\x2dd3e1\x2df95e\x2dea8ab528a264.mount: Deactivated successfully. Jan 23 23:56:51.452842 containerd[2180]: time="2026-01-23T23:56:51.451736755Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:51.452842 containerd[2180]: time="2026-01-23T23:56:51.452210635Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:51.454650 containerd[2180]: time="2026-01-23T23:56:51.452687191Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:51.454650 containerd[2180]: time="2026-01-23T23:56:51.453148975Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:51.503909 kubelet[3714]: E0123 23:56:51.503017 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwrz" podUID="062e26d7-bfb2-4194-8340-6fddf424a2ce" Jan 23 23:56:51.780378 systemd-networkd[1714]: cali0117be6fdcf: Gained IPv6LL Jan 23 23:56:51.822810 systemd-networkd[1714]: cali045614582c9: Link UP Jan 23 23:56:51.832347 systemd-networkd[1714]: cali045614582c9: Gained carrier Jan 23 23:56:51.848112 systemd-resolved[2041]: Under memory pressure, flushing caches. Jan 23 23:56:51.848153 systemd-resolved[2041]: Flushed all caches. Jan 23 23:56:51.855065 systemd-journald[1627]: Under memory pressure, flushing caches. 
Jan 23 23:56:51.891446 containerd[2180]: time="2026-01-23T23:56:51.888464493Z" level=info msg="StartContainer for \"bb07c7fcc887c0818118b3dc007113d200c78c7f4f9a39b23fc114968542eb2e\" returns successfully" Jan 23 23:56:51.924028 containerd[2180]: time="2026-01-23T23:56:51.923963937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-md8cr,Uid:ef69f672-ed17-43f4-a4a8-8456f661673c,Namespace:calico-system,Attempt:1,} returns sandbox id \"7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4\"" Jan 23 23:56:51.928967 containerd[2180]: time="2026-01-23T23:56:51.928790158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:56:51.966428 containerd[2180]: 2026-01-23 23:56:51.133 [INFO][5892] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0 calico-apiserver-6d5497fbb7- calico-apiserver 70864b69-f424-425f-943d-f03fcd5d49da 1092 0 2026-01-23 23:56:06 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6d5497fbb7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-35 calico-apiserver-6d5497fbb7-xhxnx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali045614582c9 [] [] }} ContainerID="0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a" Namespace="calico-apiserver" Pod="calico-apiserver-6d5497fbb7-xhxnx" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-" Jan 23 23:56:51.966428 containerd[2180]: 2026-01-23 23:56:51.135 [INFO][5892] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a" Namespace="calico-apiserver" Pod="calico-apiserver-6d5497fbb7-xhxnx" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0" Jan 23 23:56:51.966428 containerd[2180]: 2026-01-23 23:56:51.443 [INFO][5919] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a" HandleID="k8s-pod-network.0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0" Jan 23 23:56:51.966428 containerd[2180]: 2026-01-23 23:56:51.443 [INFO][5919] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a" HandleID="k8s-pod-network.0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000374550), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-35", "pod":"calico-apiserver-6d5497fbb7-xhxnx", "timestamp":"2026-01-23 23:56:51.443248903 +0000 UTC"}, Hostname:"ip-172-31-18-35", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:56:51.966428 containerd[2180]: 2026-01-23 23:56:51.443 [INFO][5919] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 23:56:51.966428 containerd[2180]: 2026-01-23 23:56:51.443 [INFO][5919] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:51.966428 containerd[2180]: 2026-01-23 23:56:51.443 [INFO][5919] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-35' Jan 23 23:56:51.966428 containerd[2180]: 2026-01-23 23:56:51.519 [INFO][5919] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a" host="ip-172-31-18-35" Jan 23 23:56:51.966428 containerd[2180]: 2026-01-23 23:56:51.559 [INFO][5919] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-35" Jan 23 23:56:51.966428 containerd[2180]: 2026-01-23 23:56:51.605 [INFO][5919] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:51.966428 containerd[2180]: 2026-01-23 23:56:51.621 [INFO][5919] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:51.966428 containerd[2180]: 2026-01-23 23:56:51.631 [INFO][5919] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="ip-172-31-18-35" Jan 23 23:56:51.966428 containerd[2180]: 2026-01-23 23:56:51.641 [INFO][5919] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a" host="ip-172-31-18-35" Jan 23 23:56:51.966428 containerd[2180]: 2026-01-23 23:56:51.651 [INFO][5919] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a Jan 23 23:56:51.966428 containerd[2180]: 2026-01-23 23:56:51.673 [INFO][5919] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a" host="ip-172-31-18-35" Jan 23 23:56:51.966428 containerd[2180]: 2026-01-23 23:56:51.702 [INFO][5919] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.59.8/26] block=192.168.59.0/26 handle="k8s-pod-network.0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a" host="ip-172-31-18-35" Jan 23 23:56:51.966428 containerd[2180]: 2026-01-23 23:56:51.702 [INFO][5919] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.8/26] handle="k8s-pod-network.0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a" host="ip-172-31-18-35" Jan 23 23:56:51.966428 containerd[2180]: 2026-01-23 23:56:51.702 [INFO][5919] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:56:51.966428 containerd[2180]: 2026-01-23 23:56:51.702 [INFO][5919] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.59.8/26] IPv6=[] ContainerID="0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a" HandleID="k8s-pod-network.0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0" Jan 23 23:56:51.970243 containerd[2180]: 2026-01-23 23:56:51.760 [INFO][5892] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a" Namespace="calico-apiserver" Pod="calico-apiserver-6d5497fbb7-xhxnx" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0", GenerateName:"calico-apiserver-6d5497fbb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"70864b69-f424-425f-943d-f03fcd5d49da", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5497fbb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"", Pod:"calico-apiserver-6d5497fbb7-xhxnx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali045614582c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:51.970243 containerd[2180]: 2026-01-23 23:56:51.760 [INFO][5892] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.8/32] ContainerID="0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a" Namespace="calico-apiserver" Pod="calico-apiserver-6d5497fbb7-xhxnx" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0" Jan 23 23:56:51.970243 containerd[2180]: 2026-01-23 23:56:51.760 [INFO][5892] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali045614582c9 ContainerID="0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a" Namespace="calico-apiserver" Pod="calico-apiserver-6d5497fbb7-xhxnx" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0" Jan 23 23:56:51.970243 containerd[2180]: 2026-01-23 23:56:51.869 [INFO][5892] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a" Namespace="calico-apiserver" Pod="calico-apiserver-6d5497fbb7-xhxnx" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0" Jan 23 23:56:51.970243 containerd[2180]: 2026-01-23 23:56:51.871 [INFO][5892] cni-plugin/k8s.go 446: Added Mac, interface name, and 
active container ID to endpoint ContainerID="0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a" Namespace="calico-apiserver" Pod="calico-apiserver-6d5497fbb7-xhxnx" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0", GenerateName:"calico-apiserver-6d5497fbb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"70864b69-f424-425f-943d-f03fcd5d49da", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5497fbb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a", Pod:"calico-apiserver-6d5497fbb7-xhxnx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali045614582c9", MAC:"f2:9b:fa:5e:27:70", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:51.970243 containerd[2180]: 2026-01-23 23:56:51.935 [INFO][5892] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a" Namespace="calico-apiserver" Pod="calico-apiserver-6d5497fbb7-xhxnx" WorkloadEndpoint="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0" Jan 23 23:56:52.007769 sshd[5833]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:52.026161 systemd[1]: sshd@8-172.31.18.35:22-4.153.228.146:34144.service: Deactivated successfully. Jan 23 23:56:52.036247 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 23:56:52.036458 systemd-logind[2134]: Session 9 logged out. Waiting for processes to exit. Jan 23 23:56:52.051336 systemd-logind[2134]: Removed session 9. Jan 23 23:56:52.107653 containerd[2180]: time="2026-01-23T23:56:52.106005894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:52.107653 containerd[2180]: time="2026-01-23T23:56:52.107483070Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:52.109017 containerd[2180]: time="2026-01-23T23:56:52.108769578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:52.111376 containerd[2180]: time="2026-01-23T23:56:52.111091182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:52.134310 containerd[2180]: 2026-01-23 23:56:51.449 [WARNING][5907] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" WorkloadEndpoint="ip--172--31--18--35-k8s-whisker--6db99dc799--kdht2-eth0" Jan 23 23:56:52.134310 containerd[2180]: 2026-01-23 23:56:51.474 [INFO][5907] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" Jan 23 23:56:52.134310 containerd[2180]: 2026-01-23 23:56:51.477 [INFO][5907] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" iface="eth0" netns="" Jan 23 23:56:52.134310 containerd[2180]: 2026-01-23 23:56:51.477 [INFO][5907] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" Jan 23 23:56:52.134310 containerd[2180]: 2026-01-23 23:56:51.477 [INFO][5907] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" Jan 23 23:56:52.134310 containerd[2180]: 2026-01-23 23:56:52.067 [INFO][5989] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" HandleID="k8s-pod-network.942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" Workload="ip--172--31--18--35-k8s-whisker--6db99dc799--kdht2-eth0" Jan 23 23:56:52.134310 containerd[2180]: 2026-01-23 23:56:52.069 [INFO][5989] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:52.134310 containerd[2180]: 2026-01-23 23:56:52.069 [INFO][5989] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:52.134310 containerd[2180]: 2026-01-23 23:56:52.107 [WARNING][5989] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" HandleID="k8s-pod-network.942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" Workload="ip--172--31--18--35-k8s-whisker--6db99dc799--kdht2-eth0" Jan 23 23:56:52.134310 containerd[2180]: 2026-01-23 23:56:52.107 [INFO][5989] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" HandleID="k8s-pod-network.942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" Workload="ip--172--31--18--35-k8s-whisker--6db99dc799--kdht2-eth0" Jan 23 23:56:52.134310 containerd[2180]: 2026-01-23 23:56:52.110 [INFO][5989] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:52.134310 containerd[2180]: 2026-01-23 23:56:52.115 [INFO][5907] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" Jan 23 23:56:52.136789 containerd[2180]: time="2026-01-23T23:56:52.136011103Z" level=info msg="TearDown network for sandbox \"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\" successfully" Jan 23 23:56:52.136789 containerd[2180]: time="2026-01-23T23:56:52.136054867Z" level=info msg="StopPodSandbox for \"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\" returns successfully" Jan 23 23:56:52.137746 containerd[2180]: time="2026-01-23T23:56:52.137148535Z" level=info msg="RemovePodSandbox for \"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\"" Jan 23 23:56:52.137746 containerd[2180]: time="2026-01-23T23:56:52.137212063Z" level=info msg="Forcibly stopping sandbox \"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\"" Jan 23 23:56:52.222993 containerd[2180]: time="2026-01-23T23:56:52.222933511Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:52.226690 containerd[2180]: time="2026-01-23T23:56:52.226618243Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:56:52.226937 containerd[2180]: time="2026-01-23T23:56:52.226823035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:56:52.227261 kubelet[3714]: E0123 23:56:52.227198 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:56:52.230319 kubelet[3714]: E0123 23:56:52.229353 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:56:52.230319 kubelet[3714]: E0123 23:56:52.229595 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vpgpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-md8cr_calico-system(ef69f672-ed17-43f4-a4a8-8456f661673c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:52.236001 containerd[2180]: time="2026-01-23T23:56:52.235938787Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:56:52.284068 containerd[2180]: time="2026-01-23T23:56:52.283839559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6d5497fbb7-xhxnx,Uid:70864b69-f424-425f-943d-f03fcd5d49da,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a\"" Jan 23 23:56:52.364254 containerd[2180]: 2026-01-23 23:56:52.279 [WARNING][6076] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" WorkloadEndpoint="ip--172--31--18--35-k8s-whisker--6db99dc799--kdht2-eth0" Jan 23 23:56:52.364254 containerd[2180]: 2026-01-23 23:56:52.279 [INFO][6076] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" Jan 23 23:56:52.364254 containerd[2180]: 2026-01-23 23:56:52.279 [INFO][6076] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" iface="eth0" netns="" Jan 23 23:56:52.364254 containerd[2180]: 2026-01-23 23:56:52.279 [INFO][6076] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" Jan 23 23:56:52.364254 containerd[2180]: 2026-01-23 23:56:52.279 [INFO][6076] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" Jan 23 23:56:52.364254 containerd[2180]: 2026-01-23 23:56:52.335 [INFO][6101] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" HandleID="k8s-pod-network.942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" Workload="ip--172--31--18--35-k8s-whisker--6db99dc799--kdht2-eth0" Jan 23 23:56:52.364254 containerd[2180]: 2026-01-23 23:56:52.335 [INFO][6101] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:52.364254 containerd[2180]: 2026-01-23 23:56:52.336 [INFO][6101] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:52.364254 containerd[2180]: 2026-01-23 23:56:52.353 [WARNING][6101] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" HandleID="k8s-pod-network.942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" Workload="ip--172--31--18--35-k8s-whisker--6db99dc799--kdht2-eth0" Jan 23 23:56:52.364254 containerd[2180]: 2026-01-23 23:56:52.353 [INFO][6101] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" HandleID="k8s-pod-network.942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" Workload="ip--172--31--18--35-k8s-whisker--6db99dc799--kdht2-eth0" Jan 23 23:56:52.364254 containerd[2180]: 2026-01-23 23:56:52.356 [INFO][6101] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:52.364254 containerd[2180]: 2026-01-23 23:56:52.360 [INFO][6076] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b" Jan 23 23:56:52.364254 containerd[2180]: time="2026-01-23T23:56:52.364071740Z" level=info msg="TearDown network for sandbox \"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\" successfully" Jan 23 23:56:52.370738 containerd[2180]: time="2026-01-23T23:56:52.370641848Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 23 23:56:52.371315 containerd[2180]: time="2026-01-23T23:56:52.370745420Z" level=info msg="RemovePodSandbox \"942bc2797286849fca7f9065178a86403a5b4c9d1939650c72a90cacb2f0cb2b\" returns successfully" Jan 23 23:56:52.372191 containerd[2180]: time="2026-01-23T23:56:52.371745440Z" level=info msg="StopPodSandbox for \"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\"" Jan 23 23:56:52.518131 containerd[2180]: time="2026-01-23T23:56:52.518043620Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:52.528434 containerd[2180]: time="2026-01-23T23:56:52.522571112Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:56:52.528434 containerd[2180]: time="2026-01-23T23:56:52.522733052Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:56:52.528929 kubelet[3714]: E0123 23:56:52.525681 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:56:52.528929 kubelet[3714]: E0123 23:56:52.525752 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:56:52.528929 kubelet[3714]: E0123 23:56:52.526028 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vpgpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-md8cr_calico-system(ef69f672-ed17-43f4-a4a8-8456f661673c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:52.533688 kubelet[3714]: E0123 23:56:52.532733 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-md8cr" podUID="ef69f672-ed17-43f4-a4a8-8456f661673c" Jan 23 23:56:52.534444 containerd[2180]: time="2026-01-23T23:56:52.534149421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:56:52.564458 kubelet[3714]: E0123 23:56:52.561690 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6677f6f656-js6vm" podUID="9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2" Jan 23 23:56:52.564458 kubelet[3714]: E0123 23:56:52.561883 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwrz" podUID="062e26d7-bfb2-4194-8340-6fddf424a2ce" Jan 23 23:56:52.596425 kubelet[3714]: I0123 23:56:52.595176 3714 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-9ddld" podStartSLOduration=60.595152609 podStartE2EDuration="1m0.595152609s" podCreationTimestamp="2026-01-23 23:55:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:56:52.591974649 +0000 UTC m=+64.167167960" watchObservedRunningTime="2026-01-23 23:56:52.595152609 +0000 UTC m=+64.170345896" Jan 23 23:56:52.654798 containerd[2180]: 2026-01-23 23:56:52.442 [WARNING][6115] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8b558c76-7f3e-4806-8d02-51d3c08c8f13", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8", Pod:"coredns-668d6bf9bc-dxn2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif665c765f6d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:52.654798 containerd[2180]: 2026-01-23 23:56:52.444 [INFO][6115] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" Jan 23 23:56:52.654798 containerd[2180]: 2026-01-23 23:56:52.444 [INFO][6115] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" iface="eth0" netns="" Jan 23 23:56:52.654798 containerd[2180]: 2026-01-23 23:56:52.444 [INFO][6115] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" Jan 23 23:56:52.654798 containerd[2180]: 2026-01-23 23:56:52.444 [INFO][6115] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" Jan 23 23:56:52.654798 containerd[2180]: 2026-01-23 23:56:52.544 [INFO][6122] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" HandleID="k8s-pod-network.bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0" Jan 23 23:56:52.654798 containerd[2180]: 2026-01-23 23:56:52.548 [INFO][6122] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:52.654798 containerd[2180]: 2026-01-23 23:56:52.550 [INFO][6122] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:52.654798 containerd[2180]: 2026-01-23 23:56:52.621 [WARNING][6122] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" HandleID="k8s-pod-network.bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0" Jan 23 23:56:52.654798 containerd[2180]: 2026-01-23 23:56:52.622 [INFO][6122] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" HandleID="k8s-pod-network.bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0" Jan 23 23:56:52.654798 containerd[2180]: 2026-01-23 23:56:52.635 [INFO][6122] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:52.654798 containerd[2180]: 2026-01-23 23:56:52.643 [INFO][6115] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" Jan 23 23:56:52.654798 containerd[2180]: time="2026-01-23T23:56:52.654592905Z" level=info msg="TearDown network for sandbox \"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\" successfully" Jan 23 23:56:52.654798 containerd[2180]: time="2026-01-23T23:56:52.654630441Z" level=info msg="StopPodSandbox for \"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\" returns successfully" Jan 23 23:56:52.659060 containerd[2180]: time="2026-01-23T23:56:52.656614557Z" level=info msg="RemovePodSandbox for \"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\"" Jan 23 23:56:52.659060 containerd[2180]: time="2026-01-23T23:56:52.656765685Z" level=info msg="Forcibly stopping sandbox \"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\"" Jan 23 23:56:52.834677 containerd[2180]: time="2026-01-23T23:56:52.833611186Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:52.839692 containerd[2180]: time="2026-01-23T23:56:52.839530678Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:56:52.839692 containerd[2180]: time="2026-01-23T23:56:52.839685250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:56:52.842928 kubelet[3714]: E0123 23:56:52.840564 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:56:52.842928 kubelet[3714]: E0123 23:56:52.840639 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:56:52.842928 kubelet[3714]: E0123 23:56:52.840829 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lv6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d5497fbb7-xhxnx_calico-apiserver(70864b69-f424-425f-943d-f03fcd5d49da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:52.846274 kubelet[3714]: E0123 23:56:52.843505 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5497fbb7-xhxnx" podUID="70864b69-f424-425f-943d-f03fcd5d49da" Jan 23 23:56:53.013130 containerd[2180]: 2026-01-23 23:56:52.832 [WARNING][6136] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"8b558c76-7f3e-4806-8d02-51d3c08c8f13", ResourceVersion:"1076", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"9383b2dd125267c0416abeb6b6bda03d505fe0db72aaacd2e24de03cb0d5e4b8", Pod:"coredns-668d6bf9bc-dxn2k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif665c765f6d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:53.013130 containerd[2180]: 2026-01-23 23:56:52.833 [INFO][6136] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" Jan 23 23:56:53.013130 containerd[2180]: 2026-01-23 23:56:52.833 [INFO][6136] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" iface="eth0" netns="" Jan 23 23:56:53.013130 containerd[2180]: 2026-01-23 23:56:52.833 [INFO][6136] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" Jan 23 23:56:53.013130 containerd[2180]: 2026-01-23 23:56:52.833 [INFO][6136] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" Jan 23 23:56:53.013130 containerd[2180]: 2026-01-23 23:56:52.962 [INFO][6147] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" HandleID="k8s-pod-network.bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0" Jan 23 23:56:53.013130 containerd[2180]: 2026-01-23 23:56:52.963 [INFO][6147] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:53.013130 containerd[2180]: 2026-01-23 23:56:52.963 [INFO][6147] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:56:53.013130 containerd[2180]: 2026-01-23 23:56:52.997 [WARNING][6147] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" HandleID="k8s-pod-network.bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0" Jan 23 23:56:53.013130 containerd[2180]: 2026-01-23 23:56:52.997 [INFO][6147] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" HandleID="k8s-pod-network.bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--dxn2k-eth0" Jan 23 23:56:53.013130 containerd[2180]: 2026-01-23 23:56:53.001 [INFO][6147] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:53.013130 containerd[2180]: 2026-01-23 23:56:53.006 [INFO][6136] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c" Jan 23 23:56:53.016821 containerd[2180]: time="2026-01-23T23:56:53.015555595Z" level=info msg="TearDown network for sandbox \"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\" successfully" Jan 23 23:56:53.026748 containerd[2180]: time="2026-01-23T23:56:53.026320651Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:56:53.026748 containerd[2180]: time="2026-01-23T23:56:53.026463055Z" level=info msg="RemovePodSandbox \"bd9262b04844d47af61b7e0606d8f874b011f8d4db1ba74f53cc88e0b7d3616c\" returns successfully" Jan 23 23:56:53.185620 systemd-networkd[1714]: cali8ed232d3950: Gained IPv6LL Jan 23 23:56:53.505637 systemd-networkd[1714]: cali045614582c9: Gained IPv6LL Jan 23 23:56:53.555419 kubelet[3714]: E0123 23:56:53.555317 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5497fbb7-xhxnx" podUID="70864b69-f424-425f-943d-f03fcd5d49da" Jan 23 23:56:53.562759 kubelet[3714]: E0123 23:56:53.562686 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-md8cr" podUID="ef69f672-ed17-43f4-a4a8-8456f661673c" Jan 23 23:56:56.195123 ntpd[2118]: Listen normally on 6 vxlan.calico 192.168.59.0:123 Jan 23 23:56:56.195262 ntpd[2118]: Listen normally on 7 cali7ebff8b16b3 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 23 23:56:56.195939 ntpd[2118]: 23 Jan 23:56:56 ntpd[2118]: Listen normally on 6 vxlan.calico 192.168.59.0:123 Jan 23 23:56:56.195939 ntpd[2118]: 23 Jan 23:56:56 ntpd[2118]: Listen normally on 7 cali7ebff8b16b3 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 23 23:56:56.195939 ntpd[2118]: 23 Jan 23:56:56 ntpd[2118]: Listen normally on 8 vxlan.calico [fe80::64e7:c3ff:fee4:6ee2%5]:123 Jan 23 23:56:56.195939 ntpd[2118]: 23 Jan 23:56:56 ntpd[2118]: Listen normally on 9 caliadb821dde35 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 23 23:56:56.195939 ntpd[2118]: 23 Jan 23:56:56 ntpd[2118]: Listen normally on 10 calif665c765f6d [fe80::ecee:eeff:feee:eeee%9]:123 Jan 23 23:56:56.195939 ntpd[2118]: 23 Jan 23:56:56 ntpd[2118]: Listen normally on 11 cali4a2da5c944c [fe80::ecee:eeff:feee:eeee%10]:123 Jan 23 23:56:56.195939 ntpd[2118]: 23 Jan 23:56:56 ntpd[2118]: Listen normally on 12 cali71445a8f261 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 23 23:56:56.195939 ntpd[2118]: 23 Jan 23:56:56 ntpd[2118]: Listen normally on 13 cali0117be6fdcf [fe80::ecee:eeff:feee:eeee%12]:123 Jan 23 23:56:56.195939 ntpd[2118]: 23 Jan 23:56:56 ntpd[2118]: Listen normally on 14 cali8ed232d3950 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 23 23:56:56.195939 ntpd[2118]: 23 Jan 23:56:56 ntpd[2118]: Listen normally on 15 cali045614582c9 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 23 23:56:56.195347 ntpd[2118]: Listen normally on 8 vxlan.calico [fe80::64e7:c3ff:fee4:6ee2%5]:123 Jan 23 23:56:56.195471 ntpd[2118]: Listen normally on 9 caliadb821dde35 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 23 23:56:56.195546 ntpd[2118]: Listen normally on 10 calif665c765f6d [fe80::ecee:eeff:feee:eeee%9]:123 Jan 23 23:56:56.195618 ntpd[2118]: Listen normally on 11 cali4a2da5c944c [fe80::ecee:eeff:feee:eeee%10]:123 Jan 23 23:56:56.195695 ntpd[2118]: Listen normally on 12 cali71445a8f261 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 23 23:56:56.195768 ntpd[2118]: Listen normally on 13 cali0117be6fdcf [fe80::ecee:eeff:feee:eeee%12]:123 Jan 23 23:56:56.195838 ntpd[2118]: Listen normally on 14 cali8ed232d3950 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 23 23:56:56.195907 ntpd[2118]: Listen normally on 15 cali045614582c9 [fe80::ecee:eeff:feee:eeee%14]:123 Jan 23 23:56:57.100735 systemd[1]: Started sshd@9-172.31.18.35:22-4.153.228.146:46838.service - OpenSSH per-connection server daemon (4.153.228.146:46838). Jan 23 23:56:57.645382 sshd[6169]: Accepted publickey for core from 4.153.228.146 port 46838 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:57.648211 sshd[6169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:57.656765 systemd-logind[2134]: New session 10 of user core. Jan 23 23:56:57.663086 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 23:56:58.154047 sshd[6169]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:58.162269 systemd[1]: sshd@9-172.31.18.35:22-4.153.228.146:46838.service: Deactivated successfully. Jan 23 23:56:58.168150 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 23:56:58.168849 systemd-logind[2134]: Session 10 logged out. 
Waiting for processes to exit. Jan 23 23:56:58.172781 systemd-logind[2134]: Removed session 10. Jan 23 23:56:58.233662 systemd[1]: Started sshd@10-172.31.18.35:22-4.153.228.146:46850.service - OpenSSH per-connection server daemon (4.153.228.146:46850). Jan 23 23:56:58.731287 containerd[2180]: time="2026-01-23T23:56:58.729652095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:56:58.734433 sshd[6186]: Accepted publickey for core from 4.153.228.146 port 46850 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:58.739562 sshd[6186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:58.757797 systemd-logind[2134]: New session 11 of user core. Jan 23 23:56:58.763091 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 23:56:59.005046 containerd[2180]: time="2026-01-23T23:56:59.004882537Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:59.007771 containerd[2180]: time="2026-01-23T23:56:59.007595269Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:56:59.008879 containerd[2180]: time="2026-01-23T23:56:59.007638337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:56:59.010297 kubelet[3714]: E0123 23:56:59.008174 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:56:59.010297 kubelet[3714]: E0123 23:56:59.008245 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:56:59.010297 kubelet[3714]: E0123 23:56:59.008436 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:be7b206d06df401fa8cc56417b3a1000,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n8tjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b885984c9-qcp7h_calico-system(78189c5a-8a21-4a26-9446-2683d6716342): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:59.014812 containerd[2180]: time="2026-01-23T23:56:59.014644825Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:56:59.299214 containerd[2180]: time="2026-01-23T23:56:59.298637834Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:59.300969 containerd[2180]: time="2026-01-23T23:56:59.300897266Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:56:59.301087 containerd[2180]: time="2026-01-23T23:56:59.301037186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:56:59.301300 kubelet[3714]: E0123 23:56:59.301239 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:56:59.301424 kubelet[3714]: E0123 23:56:59.301315 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:56:59.302967 kubelet[3714]: E0123 23:56:59.301560 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n8tjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b885984c9-qcp7h_calico-system(78189c5a-8a21-4a26-9446-2683d6716342): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:59.303357 kubelet[3714]: E0123 23:56:59.303299 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b885984c9-qcp7h" podUID="78189c5a-8a21-4a26-9446-2683d6716342" Jan 23 23:56:59.309033 sshd[6186]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:59.321447 systemd[1]: sshd@10-172.31.18.35:22-4.153.228.146:46850.service: Deactivated successfully. 
Jan 23 23:56:59.330666 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 23:56:59.332418 systemd-logind[2134]: Session 11 logged out. Waiting for processes to exit. Jan 23 23:56:59.337663 systemd-logind[2134]: Removed session 11. Jan 23 23:56:59.400097 systemd[1]: Started sshd@11-172.31.18.35:22-4.153.228.146:46862.service - OpenSSH per-connection server daemon (4.153.228.146:46862). Jan 23 23:56:59.909158 sshd[6197]: Accepted publickey for core from 4.153.228.146 port 46862 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:59.911882 sshd[6197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:59.921007 systemd-logind[2134]: New session 12 of user core. Jan 23 23:56:59.925977 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 23:57:00.380869 sshd[6197]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:00.389109 systemd[1]: sshd@11-172.31.18.35:22-4.153.228.146:46862.service: Deactivated successfully. Jan 23 23:57:00.395282 systemd-logind[2134]: Session 12 logged out. Waiting for processes to exit. Jan 23 23:57:00.396959 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 23:57:00.400475 systemd-logind[2134]: Removed session 12. Jan 23 23:57:03.724478 containerd[2180]: time="2026-01-23T23:57:03.723724892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:57:03.974750 containerd[2180]: time="2026-01-23T23:57:03.974560137Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:03.977070 containerd[2180]: time="2026-01-23T23:57:03.976939581Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:57:03.977316 containerd[2180]: time="2026-01-23T23:57:03.977030277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 23:57:03.977525 kubelet[3714]: E0123 23:57:03.977357 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:57:03.977525 kubelet[3714]: E0123 23:57:03.977442 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:57:03.978376 kubelet[3714]: E0123 23:57:03.977685 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdmj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qtwrz_calico-system(062e26d7-bfb2-4194-8340-6fddf424a2ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:03.979299 kubelet[3714]: E0123 23:57:03.979233 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwrz" podUID="062e26d7-bfb2-4194-8340-6fddf424a2ce" Jan 23 23:57:04.727183 containerd[2180]: 
time="2026-01-23T23:57:04.726647673Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:57:05.179509 containerd[2180]: time="2026-01-23T23:57:05.179365027Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:05.181972 containerd[2180]: time="2026-01-23T23:57:05.181830259Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:57:05.181972 containerd[2180]: time="2026-01-23T23:57:05.181935595Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:57:05.182234 kubelet[3714]: E0123 23:57:05.182137 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:05.182234 kubelet[3714]: E0123 23:57:05.182204 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:05.182889 kubelet[3714]: E0123 23:57:05.182371 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tqd4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d5497fbb7-6rwm7_calico-apiserver(e511819f-7fe1-47d1-b5b7-5258bf08f097): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:05.184321 kubelet[3714]: E0123 23:57:05.184261 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5497fbb7-6rwm7" podUID="e511819f-7fe1-47d1-b5b7-5258bf08f097" Jan 23 23:57:05.466941 systemd[1]: Started sshd@12-172.31.18.35:22-4.153.228.146:58436.service - OpenSSH per-connection server daemon (4.153.228.146:58436). Jan 23 23:57:05.730099 containerd[2180]: time="2026-01-23T23:57:05.728647366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:57:05.970926 sshd[6217]: Accepted publickey for core from 4.153.228.146 port 58436 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:05.973705 sshd[6217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:05.983003 systemd-logind[2134]: New session 13 of user core. Jan 23 23:57:05.993487 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 23 23:57:06.029617 containerd[2180]: time="2026-01-23T23:57:06.029520416Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:06.031891 containerd[2180]: time="2026-01-23T23:57:06.031789448Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:57:06.032111 containerd[2180]: time="2026-01-23T23:57:06.031813088Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:57:06.032274 kubelet[3714]: E0123 23:57:06.032201 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:06.032351 kubelet[3714]: E0123 23:57:06.032299 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:06.032938 kubelet[3714]: E0123 23:57:06.032827 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lv6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d5497fbb7-xhxnx_calico-apiserver(70864b69-f424-425f-943d-f03fcd5d49da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:06.033796 containerd[2180]: time="2026-01-23T23:57:06.033732296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:57:06.034360 kubelet[3714]: E0123 23:57:06.034303 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5497fbb7-xhxnx" podUID="70864b69-f424-425f-943d-f03fcd5d49da" Jan 23 23:57:06.329020 containerd[2180]: time="2026-01-23T23:57:06.328837461Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:06.332206 containerd[2180]: time="2026-01-23T23:57:06.332096157Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:57:06.332544 containerd[2180]: time="2026-01-23T23:57:06.332117061Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:57:06.332876 kubelet[3714]: E0123 23:57:06.332556 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:57:06.332876 kubelet[3714]: E0123 23:57:06.332699 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:57:06.334749 kubelet[3714]: E0123 23:57:06.332948 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vpgpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-md8cr_calico-system(ef69f672-ed17-43f4-a4a8-8456f661673c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:06.341777 containerd[2180]: time="2026-01-23T23:57:06.341455965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:57:06.475624 sshd[6217]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:06.481919 systemd[1]: sshd@12-172.31.18.35:22-4.153.228.146:58436.service: Deactivated successfully. Jan 23 23:57:06.489784 systemd-logind[2134]: Session 13 logged out. Waiting for processes to exit. Jan 23 23:57:06.491046 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 23:57:06.495844 systemd-logind[2134]: Removed session 13. 
Jan 23 23:57:06.620007 containerd[2180]: time="2026-01-23T23:57:06.619795774Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:06.622111 containerd[2180]: time="2026-01-23T23:57:06.621974087Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:57:06.622111 containerd[2180]: time="2026-01-23T23:57:06.622060487Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:57:06.622356 kubelet[3714]: E0123 23:57:06.622298 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:57:06.622461 kubelet[3714]: E0123 23:57:06.622413 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:57:06.622659 kubelet[3714]: E0123 23:57:06.622583 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vpgpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-md8cr_calico-system(ef69f672-ed17-43f4-a4a8-8456f661673c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:06.624373 kubelet[3714]: E0123 23:57:06.624285 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-md8cr" podUID="ef69f672-ed17-43f4-a4a8-8456f661673c" Jan 23 23:57:06.724306 containerd[2180]: time="2026-01-23T23:57:06.724111427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:57:06.974870 containerd[2180]: time="2026-01-23T23:57:06.974685432Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:06.978332 containerd[2180]: time="2026-01-23T23:57:06.978218964Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:57:06.978653 containerd[2180]: time="2026-01-23T23:57:06.978270516Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:57:06.978934 kubelet[3714]: E0123 23:57:06.978838 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:57:06.979145 kubelet[3714]: E0123 23:57:06.978935 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:57:06.979212 kubelet[3714]: E0123 23:57:06.979114 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7cbxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6677f6f656-js6vm_calico-system(9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:06.980303 kubelet[3714]: E0123 23:57:06.980245 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6677f6f656-js6vm" podUID="9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2" Jan 23 23:57:11.573972 systemd[1]: Started sshd@13-172.31.18.35:22-4.153.228.146:58438.service - OpenSSH per-connection server daemon (4.153.228.146:58438). Jan 23 23:57:11.733449 kubelet[3714]: E0123 23:57:11.731750 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b885984c9-qcp7h" podUID="78189c5a-8a21-4a26-9446-2683d6716342" Jan 23 23:57:12.119107 sshd[6236]: Accepted publickey for core from 4.153.228.146 port 58438 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:12.122088 sshd[6236]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:12.129781 systemd-logind[2134]: New session 14 of user core. Jan 23 23:57:12.139085 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jan 23 23:57:12.777611 sshd[6236]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:12.790138 systemd[1]: sshd@13-172.31.18.35:22-4.153.228.146:58438.service: Deactivated successfully. Jan 23 23:57:12.801486 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 23:57:12.804334 systemd-logind[2134]: Session 14 logged out. Waiting for processes to exit. Jan 23 23:57:12.807527 systemd-logind[2134]: Removed session 14. Jan 23 23:57:15.724472 kubelet[3714]: E0123 23:57:15.724311 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwrz" podUID="062e26d7-bfb2-4194-8340-6fddf424a2ce" Jan 23 23:57:16.724145 kubelet[3714]: E0123 23:57:16.723598 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5497fbb7-6rwm7" podUID="e511819f-7fe1-47d1-b5b7-5258bf08f097" Jan 23 23:57:17.725150 kubelet[3714]: E0123 23:57:17.725054 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-md8cr" podUID="ef69f672-ed17-43f4-a4a8-8456f661673c" Jan 23 23:57:17.875019 systemd[1]: Started sshd@14-172.31.18.35:22-4.153.228.146:58972.service - OpenSSH per-connection server daemon (4.153.228.146:58972). Jan 23 23:57:18.422053 sshd[6273]: Accepted publickey for core from 4.153.228.146 port 58972 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:18.425546 sshd[6273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:18.435535 systemd-logind[2134]: New session 15 of user core. Jan 23 23:57:18.440983 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 23 23:57:18.729415 kubelet[3714]: E0123 23:57:18.728275 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6677f6f656-js6vm" podUID="9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2" Jan 23 23:57:18.982107 sshd[6273]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:18.993031 systemd-logind[2134]: Session 15 logged out. Waiting for processes to exit. Jan 23 23:57:18.995299 systemd[1]: sshd@14-172.31.18.35:22-4.153.228.146:58972.service: Deactivated successfully. Jan 23 23:57:19.004121 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 23:57:19.011167 systemd-logind[2134]: Removed session 15. Jan 23 23:57:19.067018 systemd[1]: Started sshd@15-172.31.18.35:22-4.153.228.146:58976.service - OpenSSH per-connection server daemon (4.153.228.146:58976). Jan 23 23:57:19.572545 sshd[6287]: Accepted publickey for core from 4.153.228.146 port 58976 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:19.576244 sshd[6287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:19.587411 systemd-logind[2134]: New session 16 of user core. Jan 23 23:57:19.597024 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 23:57:20.421971 sshd[6287]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:20.435348 systemd[1]: sshd@15-172.31.18.35:22-4.153.228.146:58976.service: Deactivated successfully. Jan 23 23:57:20.445363 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 23:57:20.451274 systemd-logind[2134]: Session 16 logged out. Waiting for processes to exit. Jan 23 23:57:20.454210 systemd-logind[2134]: Removed session 16. Jan 23 23:57:20.524525 systemd[1]: Started sshd@16-172.31.18.35:22-4.153.228.146:58978.service - OpenSSH per-connection server daemon (4.153.228.146:58978). Jan 23 23:57:21.094937 sshd[6299]: Accepted publickey for core from 4.153.228.146 port 58978 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:21.099242 sshd[6299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:21.117066 systemd-logind[2134]: New session 17 of user core. Jan 23 23:57:21.128119 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 23 23:57:21.723935 kubelet[3714]: E0123 23:57:21.723006 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5497fbb7-xhxnx" podUID="70864b69-f424-425f-943d-f03fcd5d49da" Jan 23 23:57:22.867857 sshd[6299]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:22.880141 systemd[1]: sshd@16-172.31.18.35:22-4.153.228.146:58978.service: Deactivated successfully. Jan 23 23:57:22.897974 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 23:57:22.906713 systemd-logind[2134]: Session 17 logged out. Waiting for processes to exit. Jan 23 23:57:22.911744 systemd-logind[2134]: Removed session 17. Jan 23 23:57:22.967094 systemd[1]: Started sshd@17-172.31.18.35:22-4.153.228.146:58982.service - OpenSSH per-connection server daemon (4.153.228.146:58982). Jan 23 23:57:23.517208 sshd[6322]: Accepted publickey for core from 4.153.228.146 port 58982 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:23.525832 sshd[6322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:23.549855 systemd-logind[2134]: New session 18 of user core. Jan 23 23:57:23.557021 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 23 23:57:24.411864 sshd[6322]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:24.420976 systemd[1]: sshd@17-172.31.18.35:22-4.153.228.146:58982.service: Deactivated successfully. Jan 23 23:57:24.435652 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 23:57:24.442151 systemd-logind[2134]: Session 18 logged out. Waiting for processes to exit. Jan 23 23:57:24.447791 systemd-logind[2134]: Removed session 18. Jan 23 23:57:24.508909 systemd[1]: Started sshd@18-172.31.18.35:22-4.153.228.146:58986.service - OpenSSH per-connection server daemon (4.153.228.146:58986). Jan 23 23:57:25.101919 sshd[6336]: Accepted publickey for core from 4.153.228.146 port 58986 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:25.103763 sshd[6336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:25.119764 systemd-logind[2134]: New session 19 of user core. Jan 23 23:57:25.127449 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 23 23:57:25.651528 sshd[6336]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:25.666445 systemd[1]: sshd@18-172.31.18.35:22-4.153.228.146:58986.service: Deactivated successfully. Jan 23 23:57:25.682107 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 23:57:25.686009 systemd-logind[2134]: Session 19 logged out. Waiting for processes to exit. Jan 23 23:57:25.690779 systemd-logind[2134]: Removed session 19. 
Jan 23 23:57:25.725247 containerd[2180]: time="2026-01-23T23:57:25.725154281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:57:26.178469 containerd[2180]: time="2026-01-23T23:57:26.178299496Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:26.181743 containerd[2180]: time="2026-01-23T23:57:26.181579828Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:57:26.181743 containerd[2180]: time="2026-01-23T23:57:26.181685704Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:57:26.184182 kubelet[3714]: E0123 23:57:26.182299 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:57:26.184182 kubelet[3714]: E0123 23:57:26.182419 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:57:26.184182 kubelet[3714]: E0123 23:57:26.182567 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:be7b206d06df401fa8cc56417b3a1000,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n8tjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b885984c9-qcp7h_calico-system(78189c5a-8a21-4a26-9446-2683d6716342): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:26.188422 containerd[2180]: time="2026-01-23T23:57:26.188327068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:57:26.463003 containerd[2180]: time="2026-01-23T23:57:26.462792269Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:26.467111 containerd[2180]: time="2026-01-23T23:57:26.465996053Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:57:26.467111 containerd[2180]: time="2026-01-23T23:57:26.466083821Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:57:26.467348 kubelet[3714]: E0123 23:57:26.466308 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:57:26.467348 kubelet[3714]: E0123 23:57:26.466415 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:57:26.467348 kubelet[3714]: E0123 23:57:26.466574 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n8tjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b885984c9-qcp7h_calico-system(78189c5a-8a21-4a26-9446-2683d6716342): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:26.467826 kubelet[3714]: E0123 23:57:26.467774 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b885984c9-qcp7h" podUID="78189c5a-8a21-4a26-9446-2683d6716342" Jan 23 23:57:26.729485 containerd[2180]: time="2026-01-23T23:57:26.729298206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:57:27.001129 containerd[2180]: time="2026-01-23T23:57:27.000943120Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:27.003277 containerd[2180]: time="2026-01-23T23:57:27.003211660Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:57:27.003468 containerd[2180]: time="2026-01-23T23:57:27.003353476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 23:57:27.003819 kubelet[3714]: E0123 23:57:27.003734 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:57:27.003969 kubelet[3714]: E0123 23:57:27.003819 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:57:27.006335 kubelet[3714]: E0123 23:57:27.006158 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdmj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qtwrz_calico-system(062e26d7-bfb2-4194-8340-6fddf424a2ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:27.008034 kubelet[3714]: E0123 23:57:27.007946 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwrz" podUID="062e26d7-bfb2-4194-8340-6fddf424a2ce" Jan 23 23:57:28.727315 containerd[2180]: time="2026-01-23T23:57:28.727193444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:57:29.020427 containerd[2180]: time="2026-01-23T23:57:29.020052978Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:29.022495 containerd[2180]: time="2026-01-23T23:57:29.022272486Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:57:29.022495 containerd[2180]: time="2026-01-23T23:57:29.022457370Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:57:29.022926 kubelet[3714]: E0123 23:57:29.022814 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:57:29.022926 kubelet[3714]: E0123 23:57:29.022881 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:57:29.025281 kubelet[3714]: E0123 23:57:29.023687 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vpgpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-md8cr_calico-system(ef69f672-ed17-43f4-a4a8-8456f661673c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:29.029999 containerd[2180]: time="2026-01-23T23:57:29.029544534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:57:29.298912 containerd[2180]: time="2026-01-23T23:57:29.298608271Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:29.303257 containerd[2180]: time="2026-01-23T23:57:29.300993415Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:57:29.303257 containerd[2180]: time="2026-01-23T23:57:29.301152955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:57:29.303517 kubelet[3714]: E0123 23:57:29.301332 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:57:29.303517 kubelet[3714]: E0123 23:57:29.301443 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:57:29.303517 kubelet[3714]: E0123 23:57:29.301607 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vpgpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-md8cr_calico-system(ef69f672-ed17-43f4-a4a8-8456f661673c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:29.303517 kubelet[3714]: E0123 23:57:29.303074 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-md8cr" podUID="ef69f672-ed17-43f4-a4a8-8456f661673c" Jan 23 23:57:30.728440 containerd[2180]: time="2026-01-23T23:57:30.728026594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:57:30.747145 systemd[1]: Started sshd@19-172.31.18.35:22-4.153.228.146:53206.service - OpenSSH per-connection server daemon (4.153.228.146:53206). Jan 23 23:57:31.049542 containerd[2180]: time="2026-01-23T23:57:31.048752816Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:31.051418 containerd[2180]: time="2026-01-23T23:57:31.051265712Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:57:31.051658 containerd[2180]: time="2026-01-23T23:57:31.051590156Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:57:31.053420 kubelet[3714]: E0123 23:57:31.052685 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:31.053420 kubelet[3714]: E0123 23:57:31.052792 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:31.053420 kubelet[3714]: E0123 23:57:31.053037 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tqd4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d5497fbb7-6rwm7_calico-apiserver(e511819f-7fe1-47d1-b5b7-5258bf08f097): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:31.057258 kubelet[3714]: E0123 23:57:31.056481 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5497fbb7-6rwm7" podUID="e511819f-7fe1-47d1-b5b7-5258bf08f097" Jan 23 23:57:31.332927 sshd[6358]: Accepted publickey for core from 4.153.228.146 port 53206 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:31.336674 sshd[6358]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:31.345752 systemd-logind[2134]: New session 20 of user core. Jan 23 23:57:31.354025 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 23:57:31.864631 sshd[6358]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:31.876141 systemd[1]: sshd@19-172.31.18.35:22-4.153.228.146:53206.service: Deactivated successfully. Jan 23 23:57:31.898310 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 23:57:31.898423 systemd-logind[2134]: Session 20 logged out. Waiting for processes to exit. Jan 23 23:57:31.905208 systemd-logind[2134]: Removed session 20. 
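Every pull failure above has the same shape: containerd asks ghcr.io for the tag, gets http.StatusNotFound, and kubelet surfaces it as ErrImagePull for the pod. A quick way to confirm the tag really is absent from the registry (rather than a node-local auth or mirror problem) is to speak the same OCI distribution API directly. The sketch below is a hypothetical triage helper, not part of any tooling in this log; it assumes the repository allows anonymous pulls and that ghcr.io's token endpoint behaves as the distribution spec describes.

    import json
    import urllib.error
    import urllib.request

    # Hypothetical triage helper (not part of any tooling in this log):
    # ask ghcr.io directly whether a tag exists, via the OCI distribution
    # API token + manifest endpoints that containerd itself speaks when it
    # reports "not found" above. Assumes anonymous pulls are allowed.
    def ghcr_tag_exists(repo: str, tag: str) -> bool:
        token_url = "https://ghcr.io/token?scope=repository:%s:pull" % repo
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        req = urllib.request.Request(
            "https://ghcr.io/v2/%s/manifests/%s" % (repo, tag),
            method="HEAD",
            headers={
                "Authorization": "Bearer " + token,
                "Accept": "application/vnd.oci.image.index.v1+json",
            },
        )
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False  # the NotFound kubelet surfaces as ErrImagePull
            raise

    print(ghcr_tag_exists("flatcar/calico/csi", "v3.30.4"))

A False here would confirm the NotFound above is registry-side, i.e. the tag was never published, rather than anything wrong on the node.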
Jan 23 23:57:32.737427 containerd[2180]: time="2026-01-23T23:57:32.735806784Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:57:33.014958 containerd[2180]: time="2026-01-23T23:57:33.014786278Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:33.017347 containerd[2180]: time="2026-01-23T23:57:33.017193010Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:57:33.017347 containerd[2180]: time="2026-01-23T23:57:33.017295382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:57:33.017632 kubelet[3714]: E0123 23:57:33.017562 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:57:33.018192 kubelet[3714]: E0123 23:57:33.017647 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:57:33.018192 kubelet[3714]: E0123 23:57:33.017820 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7cbxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6677f6f656-js6vm_calico-system(9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:33.019556 kubelet[3714]: E0123 23:57:33.019475 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6677f6f656-js6vm" podUID="9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2" Jan 23 23:57:35.726289 containerd[2180]: time="2026-01-23T23:57:35.726209139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:57:35.988449 containerd[2180]: time="2026-01-23T23:57:35.988243960Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:35.990606 containerd[2180]: time="2026-01-23T23:57:35.990530308Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:57:35.990793 containerd[2180]: time="2026-01-23T23:57:35.990683296Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:57:35.991547 kubelet[3714]: E0123 23:57:35.991090 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:35.991547 kubelet[3714]: E0123 23:57:35.991205 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:35.994145 kubelet[3714]: E0123 23:57:35.993954 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lv6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d5497fbb7-xhxnx_calico-apiserver(70864b69-f424-425f-943d-f03fcd5d49da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:35.995720 kubelet[3714]: E0123 23:57:35.995490 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5497fbb7-xhxnx" podUID="70864b69-f424-425f-943d-f03fcd5d49da" Jan 23 23:57:36.950891 systemd[1]: Started sshd@20-172.31.18.35:22-4.153.228.146:58188.service - OpenSSH per-connection server daemon (4.153.228.146:58188). 
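From here the log shifts from fresh ErrImagePull failures to ImagePullBackOff: kubelet keeps syncing the pods but delays each retry exponentially per image instead of hammering the registry. A minimal sketch of that schedule, assuming the commonly cited kubelet defaults (10s initial delay, doubling to a 300s cap); exact timestamps in a log like this also depend on when each pod worker happens to re-sync.

    # Illustrative only: the doubling schedule behind the ImagePullBackOff
    # entries that follow, assuming kubelet's commonly cited defaults of a
    # 10s initial delay and a 300s per-image cap; real spacing also depends
    # on pod-worker re-sync timing.
    base_s, cap_s = 10, 300
    delay, schedule = base_s, []
    while delay < cap_s:
        schedule.append(delay)
        delay *= 2
    schedule.append(cap_s)
    print(schedule)  # [10, 20, 40, 80, 160, 300]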
Jan 23 23:57:37.455459 sshd[6375]: Accepted publickey for core from 4.153.228.146 port 58188 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:37.457486 sshd[6375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:37.467474 systemd-logind[2134]: New session 21 of user core. Jan 23 23:57:37.480154 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 23 23:57:37.986048 sshd[6375]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:37.995796 systemd[1]: sshd@20-172.31.18.35:22-4.153.228.146:58188.service: Deactivated successfully. Jan 23 23:57:38.005287 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 23:57:38.007253 systemd-logind[2134]: Session 21 logged out. Waiting for processes to exit. Jan 23 23:57:38.013205 systemd-logind[2134]: Removed session 21. Jan 23 23:57:39.727432 kubelet[3714]: E0123 23:57:39.725264 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwrz" podUID="062e26d7-bfb2-4194-8340-6fddf424a2ce" Jan 23 23:57:39.729067 kubelet[3714]: E0123 23:57:39.728744 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b885984c9-qcp7h" podUID="78189c5a-8a21-4a26-9446-2683d6716342" Jan 23 23:57:40.729671 kubelet[3714]: E0123 23:57:40.729571 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" 
pod="calico-system/csi-node-driver-md8cr" podUID="ef69f672-ed17-43f4-a4a8-8456f661673c" Jan 23 23:57:41.724330 kubelet[3714]: E0123 23:57:41.724252 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5497fbb7-6rwm7" podUID="e511819f-7fe1-47d1-b5b7-5258bf08f097" Jan 23 23:57:43.097793 systemd[1]: Started sshd@21-172.31.18.35:22-4.153.228.146:58194.service - OpenSSH per-connection server daemon (4.153.228.146:58194). Jan 23 23:57:43.668261 sshd[6390]: Accepted publickey for core from 4.153.228.146 port 58194 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:43.672676 sshd[6390]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:43.687181 systemd-logind[2134]: New session 22 of user core. Jan 23 23:57:43.696654 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 23:57:44.223902 sshd[6390]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:44.241151 systemd[1]: sshd@21-172.31.18.35:22-4.153.228.146:58194.service: Deactivated successfully. Jan 23 23:57:44.254641 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 23:57:44.257104 systemd-logind[2134]: Session 22 logged out. Waiting for processes to exit. Jan 23 23:57:44.261315 systemd-logind[2134]: Removed session 22. Jan 23 23:57:44.728428 kubelet[3714]: E0123 23:57:44.728076 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6677f6f656-js6vm" podUID="9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2" Jan 23 23:57:47.723940 kubelet[3714]: E0123 23:57:47.723837 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5497fbb7-xhxnx" podUID="70864b69-f424-425f-943d-f03fcd5d49da" Jan 23 23:57:49.324018 systemd[1]: Started sshd@22-172.31.18.35:22-4.153.228.146:53700.service - OpenSSH per-connection server daemon (4.153.228.146:53700). 
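With every Calico image in this stream failing the same way, triage is easier against a deduplicated list of references than against the raw journal. A minimal sketch, fed something like this journal dump on stdin; the pattern targets the exact "failed to pull and unpack image" phrasing seen above, including its escaped quotes.

    import re
    import sys

    # Minimal triage sketch: deduplicate the image references kubelet
    # reports as un-pullable in a journal dump like this one, so the bad
    # tag is chased once rather than once per pod. The pattern targets the
    # exact "failed to pull and unpack image \"...\"" phrasing above.
    pattern = re.compile(r'failed to pull and unpack image \\+"([^"\\]+)\\+"')

    refs = set()
    for line in sys.stdin:
        refs.update(pattern.findall(line))

    for ref in sorted(refs):
        print(ref)

Fed this section on stdin, it would print each failing ghcr.io/flatcar/calico reference exactly once.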
Jan 23 23:57:49.897438 sshd[6429]: Accepted publickey for core from 4.153.228.146 port 53700 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:49.899701 sshd[6429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:49.916353 systemd-logind[2134]: New session 23 of user core. Jan 23 23:57:49.927001 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 23:57:50.478118 sshd[6429]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:50.485727 systemd[1]: sshd@22-172.31.18.35:22-4.153.228.146:53700.service: Deactivated successfully. Jan 23 23:57:50.486784 systemd-logind[2134]: Session 23 logged out. Waiting for processes to exit. Jan 23 23:57:50.501164 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 23:57:50.507178 systemd-logind[2134]: Removed session 23. Jan 23 23:57:51.728451 kubelet[3714]: E0123 23:57:51.728173 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwrz" podUID="062e26d7-bfb2-4194-8340-6fddf424a2ce" Jan 23 23:57:53.038418 containerd[2180]: time="2026-01-23T23:57:53.038330153Z" level=info msg="StopPodSandbox for \"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\"" Jan 23 23:57:53.232757 containerd[2180]: 2026-01-23 23:57:53.128 [WARNING][6453] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0", GenerateName:"calico-kube-controllers-6677f6f656-", Namespace:"calico-system", SelfLink:"", UID:"9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2", ResourceVersion:"1488", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6677f6f656", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a", Pod:"calico-kube-controllers-6677f6f656-js6vm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali71445a8f261", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:53.232757 containerd[2180]: 2026-01-23 23:57:53.129 [INFO][6453] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" Jan 23 23:57:53.232757 containerd[2180]: 2026-01-23 23:57:53.129 [INFO][6453] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" iface="eth0" netns="" Jan 23 23:57:53.232757 containerd[2180]: 2026-01-23 23:57:53.129 [INFO][6453] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" Jan 23 23:57:53.232757 containerd[2180]: 2026-01-23 23:57:53.129 [INFO][6453] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" Jan 23 23:57:53.232757 containerd[2180]: 2026-01-23 23:57:53.196 [INFO][6461] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" HandleID="k8s-pod-network.3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" Workload="ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0" Jan 23 23:57:53.232757 containerd[2180]: 2026-01-23 23:57:53.196 [INFO][6461] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:53.232757 containerd[2180]: 2026-01-23 23:57:53.196 [INFO][6461] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:53.232757 containerd[2180]: 2026-01-23 23:57:53.216 [WARNING][6461] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" HandleID="k8s-pod-network.3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" Workload="ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0" Jan 23 23:57:53.232757 containerd[2180]: 2026-01-23 23:57:53.216 [INFO][6461] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" HandleID="k8s-pod-network.3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" Workload="ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0" Jan 23 23:57:53.232757 containerd[2180]: 2026-01-23 23:57:53.219 [INFO][6461] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:53.232757 containerd[2180]: 2026-01-23 23:57:53.226 [INFO][6453] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" Jan 23 23:57:53.238824 containerd[2180]: time="2026-01-23T23:57:53.233017470Z" level=info msg="TearDown network for sandbox \"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\" successfully" Jan 23 23:57:53.238824 containerd[2180]: time="2026-01-23T23:57:53.233075730Z" level=info msg="StopPodSandbox for \"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\" returns successfully" Jan 23 23:57:53.238824 containerd[2180]: time="2026-01-23T23:57:53.234933174Z" level=info msg="RemovePodSandbox for \"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\"" Jan 23 23:57:53.238824 containerd[2180]: time="2026-01-23T23:57:53.235204746Z" level=info msg="Forcibly stopping sandbox \"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\"" Jan 23 23:57:53.434304 containerd[2180]: 2026-01-23 23:57:53.344 [WARNING][6475] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0", GenerateName:"calico-kube-controllers-6677f6f656-", Namespace:"calico-system", SelfLink:"", UID:"9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2", ResourceVersion:"1488", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6677f6f656", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"75bccdc88e4c1d5a671333dc0da877e90b8a03d74d807c87712f2c7ce44ee71a", Pod:"calico-kube-controllers-6677f6f656-js6vm", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali71445a8f261", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:53.434304 containerd[2180]: 2026-01-23 23:57:53.345 [INFO][6475] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" Jan 23 23:57:53.434304 containerd[2180]: 2026-01-23 23:57:53.345 [INFO][6475] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" iface="eth0" netns="" Jan 23 23:57:53.434304 containerd[2180]: 2026-01-23 23:57:53.345 [INFO][6475] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" Jan 23 23:57:53.434304 containerd[2180]: 2026-01-23 23:57:53.345 [INFO][6475] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" Jan 23 23:57:53.434304 containerd[2180]: 2026-01-23 23:57:53.397 [INFO][6482] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" HandleID="k8s-pod-network.3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" Workload="ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0" Jan 23 23:57:53.434304 containerd[2180]: 2026-01-23 23:57:53.398 [INFO][6482] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:53.434304 containerd[2180]: 2026-01-23 23:57:53.398 [INFO][6482] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:53.434304 containerd[2180]: 2026-01-23 23:57:53.417 [WARNING][6482] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" HandleID="k8s-pod-network.3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" Workload="ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0" Jan 23 23:57:53.434304 containerd[2180]: 2026-01-23 23:57:53.419 [INFO][6482] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" HandleID="k8s-pod-network.3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" Workload="ip--172--31--18--35-k8s-calico--kube--controllers--6677f6f656--js6vm-eth0" Jan 23 23:57:53.434304 containerd[2180]: 2026-01-23 23:57:53.424 [INFO][6482] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:53.434304 containerd[2180]: 2026-01-23 23:57:53.428 [INFO][6475] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b" Jan 23 23:57:53.434304 containerd[2180]: time="2026-01-23T23:57:53.432658531Z" level=info msg="TearDown network for sandbox \"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\" successfully" Jan 23 23:57:53.441418 containerd[2180]: time="2026-01-23T23:57:53.440422303Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:57:53.441821 containerd[2180]: time="2026-01-23T23:57:53.441659683Z" level=info msg="RemovePodSandbox \"3633649d824cf5324a8138918f4a706e534cd0d971fc1fb763f3a21947fae32b\" returns successfully" Jan 23 23:57:53.443415 containerd[2180]: time="2026-01-23T23:57:53.442650259Z" level=info msg="StopPodSandbox for \"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\"" Jan 23 23:57:53.617523 containerd[2180]: 2026-01-23 23:57:53.525 [WARNING][6496] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef69f672-ed17-43f4-a4a8-8456f661673c", ResourceVersion:"1467", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4", Pod:"csi-node-driver-md8cr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ed232d3950", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:53.617523 containerd[2180]: 2026-01-23 23:57:53.525 [INFO][6496] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" Jan 23 23:57:53.617523 containerd[2180]: 2026-01-23 23:57:53.525 [INFO][6496] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" iface="eth0" netns="" Jan 23 23:57:53.617523 containerd[2180]: 2026-01-23 23:57:53.525 [INFO][6496] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" Jan 23 23:57:53.617523 containerd[2180]: 2026-01-23 23:57:53.525 [INFO][6496] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" Jan 23 23:57:53.617523 containerd[2180]: 2026-01-23 23:57:53.583 [INFO][6504] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" HandleID="k8s-pod-network.064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" Workload="ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0" Jan 23 23:57:53.617523 containerd[2180]: 2026-01-23 23:57:53.583 [INFO][6504] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:53.617523 containerd[2180]: 2026-01-23 23:57:53.584 [INFO][6504] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:53.617523 containerd[2180]: 2026-01-23 23:57:53.602 [WARNING][6504] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" HandleID="k8s-pod-network.064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" Workload="ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0" Jan 23 23:57:53.617523 containerd[2180]: 2026-01-23 23:57:53.602 [INFO][6504] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" HandleID="k8s-pod-network.064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" Workload="ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0" Jan 23 23:57:53.617523 containerd[2180]: 2026-01-23 23:57:53.605 [INFO][6504] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:53.617523 containerd[2180]: 2026-01-23 23:57:53.614 [INFO][6496] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" Jan 23 23:57:53.619763 containerd[2180]: time="2026-01-23T23:57:53.617598764Z" level=info msg="TearDown network for sandbox \"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\" successfully" Jan 23 23:57:53.619763 containerd[2180]: time="2026-01-23T23:57:53.617638304Z" level=info msg="StopPodSandbox for \"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\" returns successfully" Jan 23 23:57:53.619763 containerd[2180]: time="2026-01-23T23:57:53.618639860Z" level=info msg="RemovePodSandbox for \"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\"" Jan 23 23:57:53.619763 containerd[2180]: time="2026-01-23T23:57:53.618688580Z" level=info msg="Forcibly stopping sandbox \"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\"" Jan 23 23:57:53.733648 kubelet[3714]: E0123 23:57:53.730003 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5497fbb7-6rwm7" podUID="e511819f-7fe1-47d1-b5b7-5258bf08f097" Jan 23 23:57:53.747555 kubelet[3714]: E0123 23:57:53.744727 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-md8cr" podUID="ef69f672-ed17-43f4-a4a8-8456f661673c" Jan 23 23:57:53.901418 containerd[2180]: 2026-01-23 23:57:53.705 [WARNING][6519] 
cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ef69f672-ed17-43f4-a4a8-8456f661673c", ResourceVersion:"1467", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"7f40ac6fdc2a993871243b34973dcc701df18b23e7ce09c58ab5c9fec6a79fc4", Pod:"csi-node-driver-md8cr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8ed232d3950", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:53.901418 containerd[2180]: 2026-01-23 23:57:53.706 [INFO][6519] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" Jan 23 23:57:53.901418 containerd[2180]: 2026-01-23 23:57:53.706 [INFO][6519] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" iface="eth0" netns="" Jan 23 23:57:53.901418 containerd[2180]: 2026-01-23 23:57:53.706 [INFO][6519] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" Jan 23 23:57:53.901418 containerd[2180]: 2026-01-23 23:57:53.706 [INFO][6519] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" Jan 23 23:57:53.901418 containerd[2180]: 2026-01-23 23:57:53.865 [INFO][6526] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" HandleID="k8s-pod-network.064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" Workload="ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0" Jan 23 23:57:53.901418 containerd[2180]: 2026-01-23 23:57:53.866 [INFO][6526] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:53.901418 containerd[2180]: 2026-01-23 23:57:53.867 [INFO][6526] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:53.901418 containerd[2180]: 2026-01-23 23:57:53.881 [WARNING][6526] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" HandleID="k8s-pod-network.064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" Workload="ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0" Jan 23 23:57:53.901418 containerd[2180]: 2026-01-23 23:57:53.881 [INFO][6526] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" HandleID="k8s-pod-network.064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" Workload="ip--172--31--18--35-k8s-csi--node--driver--md8cr-eth0" Jan 23 23:57:53.901418 containerd[2180]: 2026-01-23 23:57:53.883 [INFO][6526] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:53.901418 containerd[2180]: 2026-01-23 23:57:53.891 [INFO][6519] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6" Jan 23 23:57:53.901418 containerd[2180]: time="2026-01-23T23:57:53.898429413Z" level=info msg="TearDown network for sandbox \"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\" successfully" Jan 23 23:57:53.906418 containerd[2180]: time="2026-01-23T23:57:53.905999361Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:57:53.906418 containerd[2180]: time="2026-01-23T23:57:53.906103089Z" level=info msg="RemovePodSandbox \"064d8c636ad56a80a9a073ecac847a01b554b3f927c20f2f560a20ada9a93bb6\" returns successfully" Jan 23 23:57:53.908419 containerd[2180]: time="2026-01-23T23:57:53.907116693Z" level=info msg="StopPodSandbox for \"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\"" Jan 23 23:57:54.229894 containerd[2180]: 2026-01-23 23:57:54.066 [WARNING][6542] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"062e26d7-bfb2-4194-8340-6fddf424a2ce", ResourceVersion:"1516", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96", Pod:"goldmane-666569f655-qtwrz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4a2da5c944c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:54.229894 containerd[2180]: 2026-01-23 23:57:54.066 [INFO][6542] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" Jan 23 23:57:54.229894 containerd[2180]: 2026-01-23 23:57:54.066 [INFO][6542] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" iface="eth0" netns="" Jan 23 23:57:54.229894 containerd[2180]: 2026-01-23 23:57:54.066 [INFO][6542] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" Jan 23 23:57:54.229894 containerd[2180]: 2026-01-23 23:57:54.066 [INFO][6542] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" Jan 23 23:57:54.229894 containerd[2180]: 2026-01-23 23:57:54.189 [INFO][6549] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" HandleID="k8s-pod-network.47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" Workload="ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0" Jan 23 23:57:54.229894 containerd[2180]: 2026-01-23 23:57:54.189 [INFO][6549] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:54.229894 containerd[2180]: 2026-01-23 23:57:54.189 [INFO][6549] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:54.229894 containerd[2180]: 2026-01-23 23:57:54.216 [WARNING][6549] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" HandleID="k8s-pod-network.47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" Workload="ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0" Jan 23 23:57:54.229894 containerd[2180]: 2026-01-23 23:57:54.216 [INFO][6549] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" HandleID="k8s-pod-network.47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" Workload="ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0" Jan 23 23:57:54.229894 containerd[2180]: 2026-01-23 23:57:54.220 [INFO][6549] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:54.229894 containerd[2180]: 2026-01-23 23:57:54.225 [INFO][6542] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" Jan 23 23:57:54.231440 containerd[2180]: time="2026-01-23T23:57:54.230226811Z" level=info msg="TearDown network for sandbox \"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\" successfully" Jan 23 23:57:54.231440 containerd[2180]: time="2026-01-23T23:57:54.230272027Z" level=info msg="StopPodSandbox for \"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\" returns successfully" Jan 23 23:57:54.232067 containerd[2180]: time="2026-01-23T23:57:54.231998887Z" level=info msg="RemovePodSandbox for \"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\"" Jan 23 23:57:54.232067 containerd[2180]: time="2026-01-23T23:57:54.232060219Z" level=info msg="Forcibly stopping sandbox \"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\"" Jan 23 23:57:54.388616 containerd[2180]: 2026-01-23 23:57:54.313 [WARNING][6563] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"062e26d7-bfb2-4194-8340-6fddf424a2ce", ResourceVersion:"1516", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"90c2704a5b2275ffc9542387820131b69362d6258c5698231f1d492af9e78f96", Pod:"goldmane-666569f655-qtwrz", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.59.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4a2da5c944c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:54.388616 containerd[2180]: 2026-01-23 23:57:54.313 [INFO][6563] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" Jan 23 23:57:54.388616 containerd[2180]: 2026-01-23 23:57:54.313 [INFO][6563] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" iface="eth0" netns="" Jan 23 23:57:54.388616 containerd[2180]: 2026-01-23 23:57:54.313 [INFO][6563] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" Jan 23 23:57:54.388616 containerd[2180]: 2026-01-23 23:57:54.313 [INFO][6563] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" Jan 23 23:57:54.388616 containerd[2180]: 2026-01-23 23:57:54.358 [INFO][6570] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" HandleID="k8s-pod-network.47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" Workload="ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0" Jan 23 23:57:54.388616 containerd[2180]: 2026-01-23 23:57:54.359 [INFO][6570] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:54.388616 containerd[2180]: 2026-01-23 23:57:54.359 [INFO][6570] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:54.388616 containerd[2180]: 2026-01-23 23:57:54.374 [WARNING][6570] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" HandleID="k8s-pod-network.47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" Workload="ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0" Jan 23 23:57:54.388616 containerd[2180]: 2026-01-23 23:57:54.374 [INFO][6570] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" HandleID="k8s-pod-network.47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" Workload="ip--172--31--18--35-k8s-goldmane--666569f655--qtwrz-eth0" Jan 23 23:57:54.388616 containerd[2180]: 2026-01-23 23:57:54.378 [INFO][6570] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:54.388616 containerd[2180]: 2026-01-23 23:57:54.384 [INFO][6563] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785" Jan 23 23:57:54.390446 containerd[2180]: time="2026-01-23T23:57:54.388798028Z" level=info msg="TearDown network for sandbox \"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\" successfully" Jan 23 23:57:54.399379 containerd[2180]: time="2026-01-23T23:57:54.398006672Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:57:54.399379 containerd[2180]: time="2026-01-23T23:57:54.398186720Z" level=info msg="RemovePodSandbox \"47de4c97dd5e95e1923142b90238488104653f386cd06143598cccb9a3020785\" returns successfully" Jan 23 23:57:54.399379 containerd[2180]: time="2026-01-23T23:57:54.398901608Z" level=info msg="StopPodSandbox for \"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\"" Jan 23 23:57:54.555469 containerd[2180]: 2026-01-23 23:57:54.477 [WARNING][6584] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e5b6c2c1-0276-4f5f-9587-f464f0aab16d", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e", Pod:"coredns-668d6bf9bc-9ddld", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0117be6fdcf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:54.555469 containerd[2180]: 2026-01-23 23:57:54.480 [INFO][6584] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" Jan 23 23:57:54.555469 containerd[2180]: 2026-01-23 23:57:54.480 [INFO][6584] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" iface="eth0" netns="" Jan 23 23:57:54.555469 containerd[2180]: 2026-01-23 23:57:54.480 [INFO][6584] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" Jan 23 23:57:54.555469 containerd[2180]: 2026-01-23 23:57:54.480 [INFO][6584] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" Jan 23 23:57:54.555469 containerd[2180]: 2026-01-23 23:57:54.530 [INFO][6591] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" HandleID="k8s-pod-network.0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0" Jan 23 23:57:54.555469 containerd[2180]: 2026-01-23 23:57:54.530 [INFO][6591] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:54.555469 containerd[2180]: 2026-01-23 23:57:54.530 [INFO][6591] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:57:54.555469 containerd[2180]: 2026-01-23 23:57:54.543 [WARNING][6591] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" HandleID="k8s-pod-network.0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0" Jan 23 23:57:54.555469 containerd[2180]: 2026-01-23 23:57:54.543 [INFO][6591] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" HandleID="k8s-pod-network.0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0" Jan 23 23:57:54.555469 containerd[2180]: 2026-01-23 23:57:54.546 [INFO][6591] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:54.555469 containerd[2180]: 2026-01-23 23:57:54.550 [INFO][6584] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" Jan 23 23:57:54.558878 containerd[2180]: time="2026-01-23T23:57:54.557548749Z" level=info msg="TearDown network for sandbox \"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\" successfully" Jan 23 23:57:54.558878 containerd[2180]: time="2026-01-23T23:57:54.557610465Z" level=info msg="StopPodSandbox for \"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\" returns successfully" Jan 23 23:57:54.558878 containerd[2180]: time="2026-01-23T23:57:54.558684573Z" level=info msg="RemovePodSandbox for \"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\"" Jan 23 23:57:54.559753 containerd[2180]: time="2026-01-23T23:57:54.558836949Z" level=info msg="Forcibly stopping sandbox \"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\"" Jan 23 23:57:54.732678 kubelet[3714]: E0123 23:57:54.732279 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b885984c9-qcp7h" podUID="78189c5a-8a21-4a26-9446-2683d6716342" Jan 23 23:57:54.832958 containerd[2180]: 2026-01-23 23:57:54.674 [WARNING][6606] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e5b6c2c1-0276-4f5f-9587-f464f0aab16d", ResourceVersion:"1133", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 52, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"72048e168e68bb66e1325aadf86da79a489fd44d259776ae703b58374a78463e", Pod:"coredns-668d6bf9bc-9ddld", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0117be6fdcf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:54.832958 containerd[2180]: 2026-01-23 23:57:54.674 [INFO][6606] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" Jan 23 23:57:54.832958 containerd[2180]: 2026-01-23 23:57:54.674 [INFO][6606] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" iface="eth0" netns="" Jan 23 23:57:54.832958 containerd[2180]: 2026-01-23 23:57:54.674 [INFO][6606] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" Jan 23 23:57:54.832958 containerd[2180]: 2026-01-23 23:57:54.674 [INFO][6606] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" Jan 23 23:57:54.832958 containerd[2180]: 2026-01-23 23:57:54.750 [INFO][6613] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" HandleID="k8s-pod-network.0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0" Jan 23 23:57:54.832958 containerd[2180]: 2026-01-23 23:57:54.752 [INFO][6613] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:54.832958 containerd[2180]: 2026-01-23 23:57:54.752 [INFO][6613] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:57:54.832958 containerd[2180]: 2026-01-23 23:57:54.802 [WARNING][6613] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" HandleID="k8s-pod-network.0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0" Jan 23 23:57:54.832958 containerd[2180]: 2026-01-23 23:57:54.803 [INFO][6613] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" HandleID="k8s-pod-network.0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" Workload="ip--172--31--18--35-k8s-coredns--668d6bf9bc--9ddld-eth0" Jan 23 23:57:54.832958 containerd[2180]: 2026-01-23 23:57:54.813 [INFO][6613] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:54.832958 containerd[2180]: 2026-01-23 23:57:54.827 [INFO][6606] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495" Jan 23 23:57:54.836651 containerd[2180]: time="2026-01-23T23:57:54.832717726Z" level=info msg="TearDown network for sandbox \"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\" successfully" Jan 23 23:57:54.843755 containerd[2180]: time="2026-01-23T23:57:54.843665782Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:57:54.843930 containerd[2180]: time="2026-01-23T23:57:54.843761410Z" level=info msg="RemovePodSandbox \"0216e97f5b3acb04adda25249894c68be1c9c1b8733cc37cd96f1a9589931495\" returns successfully" Jan 23 23:57:54.845999 containerd[2180]: time="2026-01-23T23:57:54.845886526Z" level=info msg="StopPodSandbox for \"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\"" Jan 23 23:57:55.016581 containerd[2180]: 2026-01-23 23:57:54.943 [WARNING][6627] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0", GenerateName:"calico-apiserver-6d5497fbb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"70864b69-f424-425f-943d-f03fcd5d49da", ResourceVersion:"1501", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5497fbb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a", Pod:"calico-apiserver-6d5497fbb7-xhxnx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali045614582c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:55.016581 containerd[2180]: 2026-01-23 23:57:54.946 [INFO][6627] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" Jan 23 23:57:55.016581 containerd[2180]: 2026-01-23 23:57:54.946 [INFO][6627] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" iface="eth0" netns="" Jan 23 23:57:55.016581 containerd[2180]: 2026-01-23 23:57:54.946 [INFO][6627] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" Jan 23 23:57:55.016581 containerd[2180]: 2026-01-23 23:57:54.946 [INFO][6627] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" Jan 23 23:57:55.016581 containerd[2180]: 2026-01-23 23:57:54.990 [INFO][6634] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" HandleID="k8s-pod-network.0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0" Jan 23 23:57:55.016581 containerd[2180]: 2026-01-23 23:57:54.991 [INFO][6634] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:55.016581 containerd[2180]: 2026-01-23 23:57:54.991 [INFO][6634] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:55.016581 containerd[2180]: 2026-01-23 23:57:55.003 [WARNING][6634] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" HandleID="k8s-pod-network.0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0" Jan 23 23:57:55.016581 containerd[2180]: 2026-01-23 23:57:55.003 [INFO][6634] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" HandleID="k8s-pod-network.0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0" Jan 23 23:57:55.016581 containerd[2180]: 2026-01-23 23:57:55.006 [INFO][6634] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:55.016581 containerd[2180]: 2026-01-23 23:57:55.011 [INFO][6627] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" Jan 23 23:57:55.018248 containerd[2180]: time="2026-01-23T23:57:55.016639591Z" level=info msg="TearDown network for sandbox \"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\" successfully" Jan 23 23:57:55.018248 containerd[2180]: time="2026-01-23T23:57:55.016677967Z" level=info msg="StopPodSandbox for \"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\" returns successfully" Jan 23 23:57:55.019354 containerd[2180]: time="2026-01-23T23:57:55.019185043Z" level=info msg="RemovePodSandbox for \"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\"" Jan 23 23:57:55.019354 containerd[2180]: time="2026-01-23T23:57:55.019241599Z" level=info msg="Forcibly stopping sandbox \"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\"" Jan 23 23:57:55.206507 containerd[2180]: 2026-01-23 23:57:55.107 [WARNING][6648] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0", GenerateName:"calico-apiserver-6d5497fbb7-", Namespace:"calico-apiserver", SelfLink:"", UID:"70864b69-f424-425f-943d-f03fcd5d49da", ResourceVersion:"1501", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6d5497fbb7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-35", ContainerID:"0160bb9bc105a6e811e93a5cb5b102f69bfb4ab78f43895b06c499bd1113211a", Pod:"calico-apiserver-6d5497fbb7-xhxnx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.59.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali045614582c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:55.206507 containerd[2180]: 2026-01-23 23:57:55.108 [INFO][6648] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" Jan 23 23:57:55.206507 containerd[2180]: 2026-01-23 23:57:55.108 [INFO][6648] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" iface="eth0" netns="" Jan 23 23:57:55.206507 containerd[2180]: 2026-01-23 23:57:55.108 [INFO][6648] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" Jan 23 23:57:55.206507 containerd[2180]: 2026-01-23 23:57:55.108 [INFO][6648] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" Jan 23 23:57:55.206507 containerd[2180]: 2026-01-23 23:57:55.176 [INFO][6656] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" HandleID="k8s-pod-network.0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0" Jan 23 23:57:55.206507 containerd[2180]: 2026-01-23 23:57:55.177 [INFO][6656] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:55.206507 containerd[2180]: 2026-01-23 23:57:55.177 [INFO][6656] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:55.206507 containerd[2180]: 2026-01-23 23:57:55.193 [WARNING][6656] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" HandleID="k8s-pod-network.0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0" Jan 23 23:57:55.206507 containerd[2180]: 2026-01-23 23:57:55.193 [INFO][6656] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" HandleID="k8s-pod-network.0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" Workload="ip--172--31--18--35-k8s-calico--apiserver--6d5497fbb7--xhxnx-eth0" Jan 23 23:57:55.206507 containerd[2180]: 2026-01-23 23:57:55.198 [INFO][6656] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:55.206507 containerd[2180]: 2026-01-23 23:57:55.201 [INFO][6648] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b" Jan 23 23:57:55.206507 containerd[2180]: time="2026-01-23T23:57:55.206252492Z" level=info msg="TearDown network for sandbox \"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\" successfully" Jan 23 23:57:55.219022 containerd[2180]: time="2026-01-23T23:57:55.217440164Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:57:55.219022 containerd[2180]: time="2026-01-23T23:57:55.217564976Z" level=info msg="RemovePodSandbox \"0a6f90a7e1c8ffe54839b8913965552698cb5321afc3792ecb3b0915d1411a7b\" returns successfully" Jan 23 23:57:55.560344 systemd[1]: Started sshd@23-172.31.18.35:22-4.153.228.146:58082.service - OpenSSH per-connection server daemon (4.153.228.146:58082). Jan 23 23:57:56.091472 sshd[6662]: Accepted publickey for core from 4.153.228.146 port 58082 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:56.095745 sshd[6662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:56.116613 systemd-logind[2134]: New session 24 of user core. Jan 23 23:57:56.123814 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 23 23:57:56.764866 sshd[6662]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:56.772215 systemd[1]: sshd@23-172.31.18.35:22-4.153.228.146:58082.service: Deactivated successfully. Jan 23 23:57:56.788894 systemd-logind[2134]: Session 24 logged out. Waiting for processes to exit. Jan 23 23:57:56.790483 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 23:57:56.801827 systemd-logind[2134]: Removed session 24. 
Jan 23 23:57:59.725439 kubelet[3714]: E0123 23:57:59.724787 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6677f6f656-js6vm" podUID="9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2" Jan 23 23:58:01.857079 systemd[1]: Started sshd@24-172.31.18.35:22-4.153.228.146:58084.service - OpenSSH per-connection server daemon (4.153.228.146:58084). Jan 23 23:58:02.401069 sshd[6677]: Accepted publickey for core from 4.153.228.146 port 58084 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:58:02.405726 sshd[6677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:58:02.415256 systemd-logind[2134]: New session 25 of user core. Jan 23 23:58:02.426760 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 23 23:58:02.734811 kubelet[3714]: E0123 23:58:02.731772 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5497fbb7-xhxnx" podUID="70864b69-f424-425f-943d-f03fcd5d49da" Jan 23 23:58:02.961149 sshd[6677]: pam_unix(sshd:session): session closed for user core Jan 23 23:58:02.973240 systemd-logind[2134]: Session 25 logged out. Waiting for processes to exit. Jan 23 23:58:02.977834 systemd[1]: sshd@24-172.31.18.35:22-4.153.228.146:58084.service: Deactivated successfully. Jan 23 23:58:02.990144 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 23:58:02.999128 systemd-logind[2134]: Removed session 25. 
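The recurring pod_workers errors above all share one root cause: every ghcr.io/flatcar/calico/*:v3.30.4 reference resolves to HTTP 404, so each pull attempt fails with NotFound and the kubelet parks the container in ImagePullBackOff between retries. The failure is reproducible from the node with the containerd Go client — a sketch, assuming the default socket path and the k8s.io namespace the kubelet uses:

```go
package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/errdefs"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// The kubelet's images live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	ref := "ghcr.io/flatcar/calico/whisker:v3.30.4"
	if _, err := client.Pull(ctx, ref); errdefs.IsNotFound(err) {
		// Matches the journal: the registry answered 404 for this tag.
		fmt.Printf("%s: not found\n", ref)
	} else if err != nil {
		panic(err)
	}
}
```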
Jan 23 23:58:04.726635 kubelet[3714]: E0123 23:58:04.723472 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwrz" podUID="062e26d7-bfb2-4194-8340-6fddf424a2ce" Jan 23 23:58:05.727899 kubelet[3714]: E0123 23:58:05.727819 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-md8cr" podUID="ef69f672-ed17-43f4-a4a8-8456f661673c" Jan 23 23:58:07.724701 containerd[2180]: time="2026-01-23T23:58:07.724354846Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:58:08.133484 containerd[2180]: time="2026-01-23T23:58:08.133267340Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:08.135464 containerd[2180]: time="2026-01-23T23:58:08.135359420Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:58:08.135608 containerd[2180]: time="2026-01-23T23:58:08.135511928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:58:08.136067 kubelet[3714]: E0123 23:58:08.135747 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:58:08.136067 kubelet[3714]: E0123 23:58:08.135808 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:58:08.136067 kubelet[3714]: E0123 23:58:08.135976 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:be7b206d06df401fa8cc56417b3a1000,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n8tjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b885984c9-qcp7h_calico-system(78189c5a-8a21-4a26-9446-2683d6716342): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:08.138416 containerd[2180]: time="2026-01-23T23:58:08.138325136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:58:08.408493 containerd[2180]: time="2026-01-23T23:58:08.407732589Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:08.410185 containerd[2180]: time="2026-01-23T23:58:08.410032689Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:58:08.410185 containerd[2180]: time="2026-01-23T23:58:08.410133297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:58:08.410555 kubelet[3714]: E0123 23:58:08.410335 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:58:08.410555 kubelet[3714]: E0123 23:58:08.410434 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:58:08.410742 kubelet[3714]: E0123 23:58:08.410587 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n8tjk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-5b885984c9-qcp7h_calico-system(78189c5a-8a21-4a26-9446-2683d6716342): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:08.411862 kubelet[3714]: E0123 23:58:08.411804 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b885984c9-qcp7h" podUID="78189c5a-8a21-4a26-9446-2683d6716342" Jan 23 23:58:08.726338 kubelet[3714]: E0123 23:58:08.724620 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5497fbb7-6rwm7" podUID="e511819f-7fe1-47d1-b5b7-5258bf08f097" Jan 23 23:58:11.723684 kubelet[3714]: E0123 23:58:11.723608 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6677f6f656-js6vm" podUID="9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2" Jan 23 23:58:16.725342 containerd[2180]: time="2026-01-23T23:58:16.725287855Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:58:16.902488 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-399d20a47d07d429df761d1e1c518748613b01ec1b7a89ca23e0d632416c33c1-rootfs.mount: Deactivated successfully. Jan 23 23:58:16.907879 containerd[2180]: time="2026-01-23T23:58:16.907797008Z" level=info msg="shim disconnected" id=399d20a47d07d429df761d1e1c518748613b01ec1b7a89ca23e0d632416c33c1 namespace=k8s.io Jan 23 23:58:16.908212 containerd[2180]: time="2026-01-23T23:58:16.908143220Z" level=warning msg="cleaning up after shim disconnected" id=399d20a47d07d429df761d1e1c518748613b01ec1b7a89ca23e0d632416c33c1 namespace=k8s.io Jan 23 23:58:16.908212 containerd[2180]: time="2026-01-23T23:58:16.908173208Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:58:17.007789 containerd[2180]: time="2026-01-23T23:58:17.007576444Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:17.009948 containerd[2180]: time="2026-01-23T23:58:17.009838228Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:58:17.010093 containerd[2180]: time="2026-01-23T23:58:17.009971620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:58:17.010366 kubelet[3714]: E0123 23:58:17.010311 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:58:17.011340 kubelet[3714]: E0123 23:58:17.010375 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:58:17.011340 kubelet[3714]: E0123 23:58:17.010674 3714 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vpgpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-md8cr_calico-system(ef69f672-ed17-43f4-a4a8-8456f661673c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:17.011748 containerd[2180]: time="2026-01-23T23:58:17.011690152Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:58:17.289269 containerd[2180]: time="2026-01-23T23:58:17.288591186Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:17.290916 containerd[2180]: time="2026-01-23T23:58:17.290782686Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:58:17.290916 containerd[2180]: time="2026-01-23T23:58:17.290860398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 23:58:17.291157 kubelet[3714]: E0123 23:58:17.291060 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:58:17.291157 
kubelet[3714]: E0123 23:58:17.291121 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:58:17.292104 kubelet[3714]: E0123 23:58:17.291470 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tdmj6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-qtwrz_calico-system(062e26d7-bfb2-4194-8340-6fddf424a2ce): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:17.292445 containerd[2180]: 
time="2026-01-23T23:58:17.291727542Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:58:17.293343 kubelet[3714]: E0123 23:58:17.293276 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwrz" podUID="062e26d7-bfb2-4194-8340-6fddf424a2ce" Jan 23 23:58:17.547267 containerd[2180]: time="2026-01-23T23:58:17.546553507Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:17.553778 containerd[2180]: time="2026-01-23T23:58:17.553666735Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:58:17.554067 containerd[2180]: time="2026-01-23T23:58:17.553875931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:58:17.554686 kubelet[3714]: E0123 23:58:17.554300 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:58:17.554686 kubelet[3714]: E0123 23:58:17.554382 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:58:17.554686 kubelet[3714]: E0123 23:58:17.554572 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vpgpx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-md8cr_calico-system(ef69f672-ed17-43f4-a4a8-8456f661673c): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:17.555936 kubelet[3714]: E0123 23:58:17.555827 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-md8cr" podUID="ef69f672-ed17-43f4-a4a8-8456f661673c" Jan 23 23:58:17.723994 containerd[2180]: time="2026-01-23T23:58:17.723940820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:58:17.913171 kubelet[3714]: I0123 23:58:17.913111 3714 scope.go:117] "RemoveContainer" containerID="399d20a47d07d429df761d1e1c518748613b01ec1b7a89ca23e0d632416c33c1" Jan 23 23:58:17.918167 containerd[2180]: time="2026-01-23T23:58:17.917915913Z" level=info msg="CreateContainer within sandbox \"ba031e17d22029930748fc0e8ac7a1331195fe0b0fb746aee31fed550176ff9c\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 23 23:58:17.943289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2162458675.mount: Deactivated successfully. Jan 23 23:58:17.948149 containerd[2180]: time="2026-01-23T23:58:17.948085833Z" level=info msg="CreateContainer within sandbox \"ba031e17d22029930748fc0e8ac7a1331195fe0b0fb746aee31fed550176ff9c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"406079ffd25102186b6a4ad2d0905b3305a865837bc1519cf4f621c05c9f551a\"" Jan 23 23:58:17.951466 containerd[2180]: time="2026-01-23T23:58:17.949556781Z" level=info msg="StartContainer for \"406079ffd25102186b6a4ad2d0905b3305a865837bc1519cf4f621c05c9f551a\"" Jan 23 23:58:18.009906 containerd[2180]: time="2026-01-23T23:58:18.009832205Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:58:18.012742 containerd[2180]: time="2026-01-23T23:58:18.012573293Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:58:18.013121 containerd[2180]: time="2026-01-23T23:58:18.012655445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:58:18.013525 kubelet[3714]: E0123 23:58:18.013456 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:58:18.014140 kubelet[3714]: E0123 23:58:18.013524 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:58:18.014140 kubelet[3714]: E0123 23:58:18.013734 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7lv6v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d5497fbb7-xhxnx_calico-apiserver(70864b69-f424-425f-943d-f03fcd5d49da): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:58:18.015650 kubelet[3714]: E0123 23:58:18.015597 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5497fbb7-xhxnx" podUID="70864b69-f424-425f-943d-f03fcd5d49da" Jan 23 23:58:18.081934 containerd[2180]: time="2026-01-23T23:58:18.081852689Z" level=info msg="StartContainer for \"406079ffd25102186b6a4ad2d0905b3305a865837bc1519cf4f621c05c9f551a\" returns successfully" Jan 23 23:58:18.545723 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c81cd9e23c6632547c173c590210d27e0fe04a00ddfcba44bada0d3cad7983c-rootfs.mount: Deactivated successfully. 
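The restart sequence above is worth reading closely: after the shim for 399d20a4… disconnected, the kubelet kept the pod sandbox, removed the dead container record ("RemoveContainer"), and created a replacement whose CRI metadata is &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} — restarts are distinguished by the Attempt counter, not by a new name. A toy sketch of that bookkeeping (illustrative types only, not the kubelet's source):

```go
package main

import "fmt"

// containerMetadata mirrors the two CRI metadata fields the log prints.
type containerMetadata struct {
	Name    string
	Attempt uint32
}

// restart keeps the name and bumps Attempt, so successive container
// records for one pod stay unique within the same sandbox.
func restart(prev containerMetadata) containerMetadata {
	return containerMetadata{Name: prev.Name, Attempt: prev.Attempt + 1}
}

func main() {
	next := restart(containerMetadata{Name: "kube-controller-manager", Attempt: 0})
	// Matches "&ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
	fmt.Printf("&ContainerMetadata{Name:%s,Attempt:%d,}\n", next.Name, next.Attempt)
}
```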
Jan 23 23:58:18.560776 containerd[2180]: time="2026-01-23T23:58:18.560413532Z" level=info msg="shim disconnected" id=3c81cd9e23c6632547c173c590210d27e0fe04a00ddfcba44bada0d3cad7983c namespace=k8s.io
Jan 23 23:58:18.560776 containerd[2180]: time="2026-01-23T23:58:18.560501732Z" level=warning msg="cleaning up after shim disconnected" id=3c81cd9e23c6632547c173c590210d27e0fe04a00ddfcba44bada0d3cad7983c namespace=k8s.io
Jan 23 23:58:18.560776 containerd[2180]: time="2026-01-23T23:58:18.560539100Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:58:18.920517 kubelet[3714]: I0123 23:58:18.920287 3714 scope.go:117] "RemoveContainer" containerID="3c81cd9e23c6632547c173c590210d27e0fe04a00ddfcba44bada0d3cad7983c"
Jan 23 23:58:18.926425 containerd[2180]: time="2026-01-23T23:58:18.926100694Z" level=info msg="CreateContainer within sandbox \"cca128d04984ed3a947f0a961292beb96882dc02f565dbf373501cc631e26fb7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 23 23:58:18.978000 containerd[2180]: time="2026-01-23T23:58:18.975538642Z" level=info msg="CreateContainer within sandbox \"cca128d04984ed3a947f0a961292beb96882dc02f565dbf373501cc631e26fb7\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"5e2a12ef76ff16d7739ab6a1b92e6ee2a3765134dd57fe51bfbf99b91facd70a\""
Jan 23 23:58:18.978000 containerd[2180]: time="2026-01-23T23:58:18.976833586Z" level=info msg="StartContainer for \"5e2a12ef76ff16d7739ab6a1b92e6ee2a3765134dd57fe51bfbf99b91facd70a\""
Jan 23 23:58:19.075222 systemd[1]: run-containerd-runc-k8s.io-5e2a12ef76ff16d7739ab6a1b92e6ee2a3765134dd57fe51bfbf99b91facd70a-runc.MnMImq.mount: Deactivated successfully.
Jan 23 23:58:19.155123 containerd[2180]: time="2026-01-23T23:58:19.155052463Z" level=info msg="StartContainer for \"5e2a12ef76ff16d7739ab6a1b92e6ee2a3765134dd57fe51bfbf99b91facd70a\" returns successfully"
Jan 23 23:58:20.726443 kubelet[3714]: E0123 23:58:20.724061 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b885984c9-qcp7h" podUID="78189c5a-8a21-4a26-9446-2683d6716342"
Jan 23 23:58:21.367569 kubelet[3714]: E0123 23:58:21.367196 3714 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-35?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 23 23:58:22.259091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fd4ba98232d13b94b27a98742138af4b0d97a786a8e0fe4347cdb30891a40e1-rootfs.mount: Deactivated successfully.
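The "Failed to update lease" entry above is the kubelet's periodic renewal of its node Lease timing out against the API server at 172.31.18.35:6443; with the default 40-second lease duration the kubelet renews roughly every 10 seconds, and persistent renewal failures are what eventually mark the node NotReady. A hedged client-go sketch for inspecting that Lease object from outside the node (the kubeconfig path is an assumption for illustration, and the k8s.io/client-go module is assumed to be available):

// leasecheck.go: illustrative inspection of the node Lease the kubelet renews.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative kubeconfig path; substitute a real admin kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The same object the kubelet PUTs in the log entry above; a stale
	// RenewTime here indicates the renewals are not getting through.
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
		context.TODO(), "ip-172-31-18-35", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if lease.Spec.HolderIdentity != nil {
		fmt.Println("holder:", *lease.Spec.HolderIdentity)
	}
	fmt.Println("last renew:", lease.Spec.RenewTime)
}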
Jan 23 23:58:22.274340 containerd[2180]: time="2026-01-23T23:58:22.274006954Z" level=info msg="shim disconnected" id=9fd4ba98232d13b94b27a98742138af4b0d97a786a8e0fe4347cdb30891a40e1 namespace=k8s.io
Jan 23 23:58:22.275089 containerd[2180]: time="2026-01-23T23:58:22.274453462Z" level=warning msg="cleaning up after shim disconnected" id=9fd4ba98232d13b94b27a98742138af4b0d97a786a8e0fe4347cdb30891a40e1 namespace=k8s.io
Jan 23 23:58:22.275089 containerd[2180]: time="2026-01-23T23:58:22.274480066Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:58:22.724209 containerd[2180]: time="2026-01-23T23:58:22.723933313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 23:58:22.945560 kubelet[3714]: I0123 23:58:22.945525 3714 scope.go:117] "RemoveContainer" containerID="9fd4ba98232d13b94b27a98742138af4b0d97a786a8e0fe4347cdb30891a40e1"
Jan 23 23:58:22.950379 containerd[2180]: time="2026-01-23T23:58:22.949650566Z" level=info msg="CreateContainer within sandbox \"aa1119018954e4548032be63f9b8e753e8f0c83ea2168039b85a73ffcf64de91\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jan 23 23:58:22.979515 containerd[2180]: time="2026-01-23T23:58:22.978527246Z" level=info msg="CreateContainer within sandbox \"aa1119018954e4548032be63f9b8e753e8f0c83ea2168039b85a73ffcf64de91\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b257d1a4bdeae94e26f67c1366ee54b9078e587af03ba09188e3770b6479f3fb\""
Jan 23 23:58:22.979515 containerd[2180]: time="2026-01-23T23:58:22.979225178Z" level=info msg="StartContainer for \"b257d1a4bdeae94e26f67c1366ee54b9078e587af03ba09188e3770b6479f3fb\""
Jan 23 23:58:23.014454 containerd[2180]: time="2026-01-23T23:58:23.013120282Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 23 23:58:23.019912 containerd[2180]: time="2026-01-23T23:58:23.018337810Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 23:58:23.024142 containerd[2180]: time="2026-01-23T23:58:23.024065386Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 23:58:23.025527 kubelet[3714]: E0123 23:58:23.024881 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 23:58:23.025527 kubelet[3714]: E0123 23:58:23.024946 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 23:58:23.025527 kubelet[3714]: E0123 23:58:23.025119 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tqd4z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-6d5497fbb7-6rwm7_calico-apiserver(e511819f-7fe1-47d1-b5b7-5258bf08f097): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:58:23.027691 kubelet[3714]: E0123 23:58:23.027067 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5497fbb7-6rwm7" podUID="e511819f-7fe1-47d1-b5b7-5258bf08f097"
Jan 23 23:58:23.104472 containerd[2180]: time="2026-01-23T23:58:23.104238154Z" level=info msg="StartContainer for \"b257d1a4bdeae94e26f67c1366ee54b9078e587af03ba09188e3770b6479f3fb\" returns successfully"
Jan 23 23:58:24.726209 containerd[2180]: time="2026-01-23T23:58:24.725906234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 23 23:58:24.997800 containerd[2180]: time="2026-01-23T23:58:24.997454872Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 23 23:58:25.000057 containerd[2180]: time="2026-01-23T23:58:24.999722740Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 23 23:58:25.000057 containerd[2180]: time="2026-01-23T23:58:24.999784456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 23 23:58:25.000758 kubelet[3714]: E0123 23:58:25.000495 3714 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 23:58:25.000758 kubelet[3714]: E0123 23:58:25.000578 3714 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 23:58:25.001741 kubelet[3714]: E0123 23:58:25.001477 3714 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7cbxv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-6677f6f656-js6vm_calico-system(9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 23 23:58:25.002760 kubelet[3714]: E0123 23:58:25.002697 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6677f6f656-js6vm" podUID="9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2"
Jan 23 23:58:30.675070 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5e2a12ef76ff16d7739ab6a1b92e6ee2a3765134dd57fe51bfbf99b91facd70a-rootfs.mount: Deactivated successfully.
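The &Container{...} dumps above are kubelet logging the pod-spec Go struct verbatim. Rendered back into k8s.io/api types, the calico-kube-controllers probes from that dump read as below; this is a reconstruction of the logged fields only, assuming a module that vendors k8s.io/api, not an authoritative manifest:

// probes.go: the liveness/readiness probes exactly as logged in the spec dump above.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	liveness := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{Command: []string{"/usr/bin/check-status", "-l"}},
		},
		InitialDelaySeconds: 10,
		TimeoutSeconds:      10,
		PeriodSeconds:       60,
		SuccessThreshold:    1,
		FailureThreshold:    6, // up to six minutes of failures before a restart
	}
	readiness := &corev1.Probe{
		ProbeHandler: corev1.ProbeHandler{
			Exec: &corev1.ExecAction{Command: []string{"/usr/bin/check-status", "-r"}},
		},
		TimeoutSeconds:   10,
		PeriodSeconds:    30,
		SuccessThreshold: 1,
		FailureThreshold: 3,
	}
	fmt.Println(liveness, readiness)
}

None of these probes ever run here, since the container never starts: the pull fails before the runtime reaches probing.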
Jan 23 23:58:30.679375 containerd[2180]: time="2026-01-23T23:58:30.679287944Z" level=info msg="shim disconnected" id=5e2a12ef76ff16d7739ab6a1b92e6ee2a3765134dd57fe51bfbf99b91facd70a namespace=k8s.io
Jan 23 23:58:30.679375 containerd[2180]: time="2026-01-23T23:58:30.679363832Z" level=warning msg="cleaning up after shim disconnected" id=5e2a12ef76ff16d7739ab6a1b92e6ee2a3765134dd57fe51bfbf99b91facd70a namespace=k8s.io
Jan 23 23:58:30.680081 containerd[2180]: time="2026-01-23T23:58:30.679409156Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 23 23:58:30.724699 kubelet[3714]: E0123 23:58:30.724609 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-md8cr" podUID="ef69f672-ed17-43f4-a4a8-8456f661673c"
Jan 23 23:58:30.726836 kubelet[3714]: E0123 23:58:30.724755 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-qtwrz" podUID="062e26d7-bfb2-4194-8340-6fddf424a2ce"
Jan 23 23:58:30.726836 kubelet[3714]: E0123 23:58:30.726767 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5497fbb7-xhxnx" podUID="70864b69-f424-425f-943d-f03fcd5d49da"
Jan 23 23:58:30.974625 kubelet[3714]: I0123 23:58:30.974152 3714 scope.go:117] "RemoveContainer" containerID="3c81cd9e23c6632547c173c590210d27e0fe04a00ddfcba44bada0d3cad7983c"
Jan 23 23:58:30.974625 kubelet[3714]: I0123 23:58:30.974608 3714 scope.go:117] "RemoveContainer" containerID="5e2a12ef76ff16d7739ab6a1b92e6ee2a3765134dd57fe51bfbf99b91facd70a"
Jan 23 23:58:30.975116 kubelet[3714]: E0123 23:58:30.974850 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-ztdlc_tigera-operator(eaaa8e7e-357b-4899-ab59-c941d4d76757)\"" pod="tigera-operator/tigera-operator-7dcd859c48-ztdlc" podUID="eaaa8e7e-357b-4899-ab59-c941d4d76757"
Jan 23 23:58:30.977805 containerd[2180]: time="2026-01-23T23:58:30.977751814Z" level=info msg="RemoveContainer for \"3c81cd9e23c6632547c173c590210d27e0fe04a00ddfcba44bada0d3cad7983c\""
Jan 23 23:58:30.984342 containerd[2180]: time="2026-01-23T23:58:30.984256306Z" level=info msg="RemoveContainer for \"3c81cd9e23c6632547c173c590210d27e0fe04a00ddfcba44bada0d3cad7983c\" returns successfully"
Jan 23 23:58:31.368996 kubelet[3714]: E0123 23:58:31.368448 3714 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.35:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-35?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 23 23:58:35.723576 kubelet[3714]: E0123 23:58:35.723452 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5b885984c9-qcp7h" podUID="78189c5a-8a21-4a26-9446-2683d6716342"
Jan 23 23:58:36.724090 kubelet[3714]: E0123 23:58:36.723852 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-6d5497fbb7-6rwm7" podUID="e511819f-7fe1-47d1-b5b7-5258bf08f097"
Jan 23 23:58:36.724090 kubelet[3714]: E0123 23:58:36.723989 3714 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6677f6f656-js6vm" podUID="9f6c0fa0-fa2c-4b12-819f-ba0af7abf4d2"
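The "back-off 10s" in the CrashLoopBackOff entry above is the first step of kubelet's restart back-off, which doubles per failed restart and, in an unmodified kubelet, is capped at five minutes; the repeated ImagePullBackOff entries follow the same doubling shape. A small sketch of that schedule (the 10s base is taken from the log; the 5m cap is kubelet's default MaxContainerBackOff, stated here as an assumption about a stock kubelet):

// backoff.go: illustrative crash-loop back-off schedule matching the entries above.
package main

import (
	"fmt"
	"time"
)

func main() {
	delay := 10 * time.Second          // initial back-off, as logged ("back-off 10s")
	const maxBackOff = 5 * time.Minute // assumed default cap for a stock kubelet
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("restart %d: wait %v\n", attempt, delay)
		delay *= 2
		if delay > maxBackOff {
			delay = maxBackOff
		}
	}
}

This is why the same pods keep reappearing in the log at widening intervals: every retry re-resolves the missing v3.30.4 tags on ghcr.io, fails identically, and pushes the next attempt further out.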